Structure fusion based on graph convolutional networks for semi-supervised classification

07/02/2019 ∙ by Guangfeng Lin, et al.

Faced with the diversity and complexity of multi-view data in semi-supervised classification, most existing graph convolutional networks focus on network architecture construction or salient graph structure preservation, and ignore the contribution of the complete graph structure to semi-supervised classification. To mine a more complete distribution structure from multi-view data while considering both specificity and commonality, we propose structure fusion based on graph convolutional networks (SF-GCN) for improving the performance of semi-supervised classification. SF-GCN not only retains the specific characteristics of each view's data by spectral embedding, but also captures the common style of the multi-view data by a distance metric between the multiple graph structures. Assuming a linear relationship between the multiple graph structures, we construct the optimization function of the structure fusion model by balancing the specificity loss and the commonality loss. By solving this function, we simultaneously obtain the fusion spectral embedding of the multi-view data and the fusion structure, which is used as the adjacency matrix input to graph convolutional networks for semi-supervised classification. Experiments demonstrate that SF-GCN outperforms the state of the art on three challenging citation network datasets: Cora, Citeseer and Pubmed.

1 Introduction

As an efficient representation of data distribution, the graph plays an important role in describing the intrinsic structure of data. Therefore, many existing works have built significant theory and methods on the graph structure of data in pattern recognition, such as graph cuts building energy functions for semantic segmentation veksler2019efficient , graph-based learning systems constructing accurate recommendations from the interactions of different objects monti2017geometric ying2018graph , graphs modeling molecular bioactivity for drug discovery defferrard2016convolutional gilmer2017neural , and graphs simulating the link connections of citation networks for classifying different groups defferrard2016convolutional gilmer2017neural khan2019multi . In fact, we usually observe objects and their relationships (these relationships define the structure of the objects, which can often be described by a graph) from multiple views, which provide more abundant and complete information for object recognition. Learning on multiple graphs (multiple observation structures) can effectively mine these relationships to discriminate between different objects.

Figure 1: The diagram of structure fusion based on graph convolutional networks (SF-GCN), in which three graphs indicate the structures of the multi-view data and eight nodes represent the nodes in these graphs (the differently colored connecting lines denote the various connection weights); the linear coefficients between the multi-graph structures are used for complementary fusion.

Existing learning methods on multiple graphs tend toward two approaches. One is structure fusion Lin20131286 Lin2014146 7268821 7301305 Lin20161 Lin2017275 Lin2017Dynamic Lin2018structure lin2018class LINGF2018 lin2019transfer or diffusion on the tensor product graph yang2011affinity yang2012affinity bai2019automatic li2019semi bai2017regularized bai2017ensemble based on complete data, which includes the observation data of every view. The other is graph convolutional networks with salient graph structure preservation khan2019multi based on incomplete data, in which some view observation data are lost. For example, link relationships can be extracted as required by the application in citation networks, but they cannot be recomputed from the corresponding observation data. In other words, these link relationships exist while the corresponding supporting data are lost. Therefore, methods based on graph convolutional networks usually ignore the complete complementarity of the different observation structures when the multi-view data are incomplete. To address this issue, we attempt to construct structure fusion based on graph convolutional networks for classification. Figure 1 shows the overall flow diagram of structure fusion based on graph convolutional networks (SF-GCN). The inspiration of SF-GCN comes from Multi-GCN in the literature khan2019multi , but there are two points of difference compared with Multi-GCN. One is that SF-GCN considers the inequality of the multiple structures, while Multi-GCN treats their relationship equally. The other is that SF-GCN focuses on the contributions of all node structures in the fusion structure, while Multi-GCN only emphasizes the salient structure of part of the nodes. From a classification perspective, considering both the strong and weak links between nodes when complementing the structure fits the intrinsic structure of the data better for classification.

Our contributions can be summarized as follows. (a) We present a novel structure fusion based on graph convolutional networks (SF-GCN) that discriminates the different classes by optimizing the linear relationship of multiple observation structures while balancing the specificity loss and the commonality loss. (b) On three citation datasets with sparse document features and document link relationships, the proposed SF-GCN outperforms the state of the art for semi-supervised classification. (c) Our model is generalized to different multi-graph fusion methods for evaluating the performance of the proposed SF-GCN.

2 Related Works

In this section, we mainly review recent related works on structure fusion and graph neural networks.

2.1 Structure fusion

Structure fusion, initially proposed in Lin20131286 , can merge multiple structures for shape classification. In follow-up works, the extended methods can be divided into three categories according to their fusion strategies. The first kind of methods tries to find the optimized linear relationship of multiple observation structures based on different manifold learning methods Lin2014146 or statistical model analysis 7268821 . The second kind of methods attempts to mine the nonlinear relationship of heterogeneous feature structures based on global features 7301305 Lin20161 or local feature encoding Lin2017275 . The third kind of methods can capture the dynamic changes of multiple structures for semi-supervised classification Lin2017Dynamic or use structure propagation for zero-shot learning Lin2018structure lin2018class LINGF2018 lin2019transfer .

As mentioned above, existing methods emphasize the completeness of data and their relationships based on data projection, while graph convolutional networks focus on the transformation and evolution of data structure within deep learning frameworks. Therefore, we expect to draw support from structure fusion based on a structure metric and from graph convolutional networks to process incomplete view data, and to find the evolution law of the fusion structure with consideration of its specificity and commonality.

2.2 Graph neural network

Graph neural networks can discover potential data relationships through computation on graph nodes and links. In particular, this computation is defined as convolution for graph data, and graph convolutional networks (GCN) have become a promising direction in pattern recognition. In terms of the node representation used, graph convolutional networks include spectral-based GCN and spatial-based GCN. Spectral-based GCN defines the graph Fourier transform based on the graph Laplacian matrix to project the graph signal into an orthonormal space. The difference between these methods is the selection of the filter, which may be a learned parameter set bruna2013spectral , a Chebyshev polynomial defferrard2016convolutional , or a first-order Chebyshev polynomial kipf2016semi chen2018fastgcn . Spatial-based GCN regards an image as a special graph in which each pixel describes a node. To avoid storing all states, these methods have presented improved training strategies, such as sub-graph training hamilton2017inductive or stochastic asynchronous training dai2018learning . Furthermore, some complex network architectures utilize gating units to control the selection of node neighborhoods liu2018geniepath , design two graph convolutional networks that consider the local and global consistency on the graph zhuang2018dual , or adjust the receptive field of each node on the graph by hyper-parameters van2018filter .

Spectral-based GCN can explicitly construct the learning model on the graph structure, which can easily be separated from the GCN architecture. This point provides a way to process multiple structures, which may be incremental. In this paper, we focus on the important role of the graph (structure) obtained from multi-view data, and attempt to mine the plentiful information in multiple structures as the input of spectral-based GCN.

3 Structure fusion based on graph convolutional networks

Figure 2: The mechanism of structure fusion in SF-GCN.

To the best of our knowledge, existing structure fusion methods usually construct an optimization function for feature projection, in which the feature data and the corresponding structure jointly participate in the computation. Because of the possible loss of multi-view data and the need to preserve its structure, we expect to build a novel structure fusion based on a structure metric, in which the optimization function only involves the multiple structures, avoiding the negative effect of lost data. At the same time, multiple structures each have their specificity as well as a commonality. Therefore, we also anticipate that the novel structure fusion can be constrained by these characteristics of the multiple structures. Figure 2 demonstrates the internal mechanism of structure fusion in SF-GCN. First, we construct the specificity loss based on spectral embedding with consideration of the linear relationship of the multiple structures. Second, we measure the commonality loss between the multiple structures based on a distance metric on the Grassmann manifold. Finally, we jointly exploit the structure fusion based on the two losses and input the fused structure into the GCN for classification.

3.1 Specificity loss of multiple structures

Given an object set with multiple views, we can use a graph to describe the observation distribution of the data in each view. Therefore, the graph is the representation of the observation structure, and multiple graphs indicate the multiple structures of the data from the multi-view observations. Because the multiple structures detail the same object set, each graph $G_i=(V,E_i)$ ($i=1,\dots,m$ for $m$ views) includes the same vertex set $V$ but a possibly different edge set $E_i$. Let $A_i$ be the adjacency matrix of $G_i$, which is the numerical expression of the structure in the $i$-th view. In terms of spectral embedding, we can obtain the following optimization function on the embedding matrix $U_i \in \mathbb{R}^{n \times k}$ ($n$ is the number of samples and $k$ is the dimension of the embedding space) of each view.

\min_{U_i} \operatorname{tr}\!\left(U_i^{\top} L_i U_i\right), \quad \text{s.t. } U_i^{\top} U_i = I \qquad (1)

Where $L_i = D_i - A_i$ is the Laplacian matrix of $G_i$ and $D_i$ is the degree matrix of $A_i$. Therefore, $L_i$ can still describe the characteristics of the structure of graph $G_i$. We can compute the embedding matrix $U_i$ by optimizing equation (1), which is equivalent to an eigenvalue problem. When all eigenvalues are solved, the eigenvectors corresponding to the $k$ smallest eigenvalues build the embedding matrix $U_i$, which projects the original nodes into the low-dimensional spectral space xia2010multiview . We can regard $\operatorname{tr}(U_i^{\top} L_i U_i)$ as the specificity loss of the structure of graph $G_i$, and we can then reformulate the specificity loss of the multiple structures as follows.

\mathcal{L}_{sp}(U, w) = \operatorname{tr}\!\Big(U^{\top} \Big(\sum_{i=1}^{m} w_i L_i\Big) U\Big), \quad U^{\top} U = I \qquad (2)

Where $U$ is the embedding matrix of the multiple structures of the fused graph and closely approximates each $U_i$. Suppose the fusion structure $A$ is the linear combination of the $A_i$, that is $A=\sum_{i=1}^{m} w_i A_i$; then $L$ and the $L_i$ have the same linear relationship $L=\sum_{i=1}^{m} w_i L_i$, in which $w_i$ is the linear coefficient that encodes the importance of each structure.
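
To make the spectral embedding step concrete, the following Python sketch (not the authors' code; the helper names and the use of dense numpy arrays are our assumptions) computes, for each view's adjacency matrix, the unnormalized Laplacian and the eigenvectors associated with the k smallest eigenvalues, i.e. the per-view embedding of equation (1).

    import numpy as np

    def laplacian(A):
        # Unnormalized graph Laplacian L = D - A for a symmetric adjacency matrix A.
        return np.diag(A.sum(axis=1)) - A

    def spectral_embedding(A, k):
        # Eigenvectors of L for the k smallest eigenvalues: they minimize
        # tr(U^T L U) subject to U^T U = I, as in equation (1).
        L = laplacian(A)
        eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
        return eigvecs[:, :k]

    # Toy example with two views over the same four nodes.
    A1 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
    A2 = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
    U1, U2 = spectral_embedding(A1, k=2), spectral_embedding(A2, k=2)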

3.2 Commonality loss of multiple structures

To measure the commonality loss of the multiple structures, we need a metric for the distance between the Laplacian matrices $L$ and $L_i$. From the solution of equation (1), we obtain equation (3), which describes the internal connection between the embedding matrix $U_i$ and the corresponding Laplacian matrix $L_i$.

L_i U_i = U_i \Lambda_i \qquad (3)

Where $\Lambda_i$ is a diagonal matrix whose diagonal values are the $k$ smallest eigenvalues of $L_i$. $U_i$ can be interpreted as a subspace preserving the smaller variance of the columns of $L_i$, that is, reserving the larger variance of the columns of the structure $A_i$. In other words, $U_i$ keeps more of the discriminative information of the data. Similarly, $U$ has the same sense for the multiple observation structures. Therefore, we can replace the distance between the Laplacian matrices $L$ and $L_i$ by the distance between $U$ and $U_i$ to indirectly compute the commonality loss of the multiple structures. This point is consistent with the assumption of the specificity loss, namely that $U$ approximates each $U_i$ between each view and the multi-view fusion.

In terms of Grassmann manifold theory lin2012multi turaga2011statistical , the orthonormal matrix $U_i$ can be regarded as spanning a unique subspace with its columns, which can be projected onto a unique point on the Grassmann manifold. Similarly, $U$ can also be mapped to a unique point on this Grassmann manifold. Therefore, the principal angles between these subspaces can represent the distance between $U$ and $U_i$. Furthermore, this distance can be reformulated as follows dong2013clustering .

d^2_{proj}(U, U_i) = k - \operatorname{tr}\!\left(U U^{\top} U_i U_i^{\top}\right) \qquad (4)

For the multiple structures, we define the commonality loss as the sum of the distances between $U$ and each $U_i$ as follows.

\mathcal{L}_{co}(U) = \sum_{i=1}^{m} \Big(k - \operatorname{tr}\!\left(U U^{\top} U_i U_i^{\top}\right)\Big) \qquad (5)
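
As a sketch of this commonality term (our reading of the projection distance of dong2013clustering; the exact form and constants in the paper may differ), the distance between two k-dimensional spectral subspaces and the summed commonality loss could be computed as follows in Python.

    import numpy as np

    def grassmann_proj_dist_sq(U, Ui):
        # Squared projection distance between the subspaces spanned by U and Ui:
        # d^2 = k - tr(U U^T Ui Ui^T), both matrices having k orthonormal columns (eq. (4)).
        k = U.shape[1]
        return k - np.linalg.norm(U.T @ Ui, ord="fro") ** 2   # tr(U U^T Ui Ui^T) = ||U^T Ui||_F^2

    def commonality_loss(U, U_views):
        # Sum of projection distances between the fusion embedding U and each view embedding (eq. (5)).
        return sum(grassmann_proj_dist_sq(U, Ui) for Ui in U_views)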

3.3 Structure fusion by structure metric losses

As two structure metric losses, the specificity loss balances the contribution of the structure in each view, while the commonality loss considers the similarity of the multiple structures across views. Both structure metric losses constrain the linear relationship of the multiple structures. Therefore, we combine these structure metric losses into a total loss for encoding the importance of the multiple structures. The total loss can be formulated as follows.

\mathcal{L}(U, w) = \mathcal{L}_{sp}(U, w) + \lambda\, \mathcal{L}_{co}(U) \qquad (6)

Where $\lambda$ is a regularization parameter. From equation (6), we can construct the objective optimization function as follows.

(7)

In the commonality loss, the constant term does not influence the trend of the loss, so we may remove it for convenient computation. Equation (7) is then reformulated as equation (8), with $\lambda$ balancing the specificity and commonality terms.

(8)

Equation (8) is a nonconvex optimization problem; we solve it by alternately optimizing the fusion embedding $U$ and the linear coefficients $w$. If $w$ is fixed, equation (8) can be transformed into an eigenvalue problem as follows.

(9)

Equation (9) is equivalent to an eigenvalue problem. When all eigenvalues of the resulting matrix are solved, the eigenvectors corresponding to the $k$ smallest eigenvalues build the fusion embedding matrix $U$. If $U$ is fixed, equation (8) can be converted into a quadratic programming problem in $w$ as follows.

(10)

By alternately solving equation (9) and equation (10), we obtain the fusion embedding matrix $U$ and the linear relationship $w$ of the multiple structures. Furthermore, the fusion structure (fusion adjacency matrix) can be computed as $A=\sum_{i=1}^{m} w_i A_i$.

Algorithm 1 shows the pseudo-code for computing the fusion structure of the multiple structures. The algorithm has four steps. The first step (line 1) initializes the linear relationship of the multiple structures. The second step (lines 2 to 3) computes the Laplacian matrix and the spectral embedding of each view. The third step (lines 4 to 6) alternately optimizes the spectral embedding fusion and the linear relationship of the multiple structures. The last step (line 8) calculates the fusion structure as the linear combination of the individual structures. The complexity of this algorithm depends on the number of views $m$, the number of samples $n$, the dimension $k$ of the selected eigenvectors, the number of optimization iterations, and the number of bits in the input of the algorithm.

0:  Input: $A_1,\dots,A_m$: adjacency matrices of the graphs $G_1,\dots,G_m$; $\lambda$: regularization parameter of the total loss; $T$: the number of iterations
0:  Output: $A$: fusion structure of the multiple structures
1:  Initialize the linear relationship $w$ of the multiple structures and the regularization parameter $\lambda$
2:  Compute the Laplacian matrix $L_i$ of each $A_i$
3:  Compute the spectral embedding $U_i$ of the structure in each view by equation (1)
4:  for $t = 1, \dots, T$ do
5:     Compute the spectral embedding fusion $U$ of the multiple structures by equation (9)
6:     Update the linear relationship $w$ of the multiple structures by equation (10)
7:  end for
8:  Compute the fusion structure by $A=\sum_{i=1}^{m} w_i A_i$
Algorithm 1 Fusion structure of multiple structures
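
The alternating procedure can be sketched in Python as below. This is an illustration, not the authors' implementation: the U-step follows our reading of equation (9), while equation (10) (the quadratic program for w) is not recoverable from this copy, so the w-step here is a simple placeholder update that down-weights views with a large specificity term.

    import numpy as np

    def fuse_structures(adjs, k, lam=1.0, iters=10):
        # adjs: list of per-view adjacency matrices (symmetric, same node set).
        m = len(adjs)
        laps = [np.diag(A.sum(axis=1)) - A for A in adjs]          # per-view Laplacians
        U_views = [np.linalg.eigh(L)[1][:, :k] for L in laps]      # per-view embeddings, eq. (1)
        w = np.full(m, 1.0 / m)                                    # line 1: initialize weights

        for _ in range(iters):                                     # lines 4-7
            # U-step: k smallest eigenvectors of sum_i w_i L_i - lam * sum_i U_i U_i^T
            M = sum(wi * Li for wi, Li in zip(w, laps)) - lam * sum(Ui @ Ui.T for Ui in U_views)
            U = np.linalg.eigh(M)[1][:, :k]
            # w-step (placeholder for eq. (10)): inverse specificity, renormalized to sum to 1
            spec = np.array([np.trace(U.T @ Li @ U) for Li in laps])
            w = 1.0 / np.maximum(spec, 1e-12)
            w = w / w.sum()

        A_fused = sum(wi * Ai for wi, Ai in zip(w, adjs))          # line 8: fusion adjacency
        return A_fused, U, w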

3.4 Graph convolutional networks

In terms of the multiplication property of convolution in the Fourier domain, graph convolution is defined as the multiplication between a signal and a filter bruna2013spectral . Further, graph convolution can be approximated by Chebyshev polynomials and, in first-order form with the renormalization trick kipf2016semi , becomes the following.

g_{\theta} \star x \approx \theta\, \tilde{D}^{-1/2} \tilde{A}\, \tilde{D}^{-1/2} x \qquad (11)

Where the normalized Laplacian of graph $G$ is $L = I_N - D^{-1/2} A D^{-1/2} = \Phi \Lambda \Phi^{\top}$, whose eigen-decomposition underlies the Chebyshev polynomial approximation ($I_N$ is the identity matrix and $D$ is the degree matrix of graph $G$); $\tilde{D}$ and $\tilde{A}$ are respectively the degree and adjacency matrices rescaled by the renormalization trick, $\tilde{A} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$; and $\theta$ is the filter parameter.

The fusion structure can be directly input into the above graph convolutional network. The forward propagation of a two-layer graph convolutional network can be written as follows.

Z = \sigma_2\!\left(\hat{A}\, \sigma_1\!\left(\hat{A} X W^{(0)}\right) W^{(1)}\right), \quad \hat{A} = \tilde{D}^{-1/2} \tilde{A}\, \tilde{D}^{-1/2} \qquad (12)

Where $Z$ is the output of the network; $X$ is the representation matrix of the nodes; $W^{(0)}$ and $W^{(1)}$ are respectively the first- and second-layer filter parameters; and $\sigma_1$ and $\sigma_2$ are different types of activation functions located in the two layers (e.g., ReLU and softmax).
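
A minimal numpy sketch of this two-layer forward pass, using the fused adjacency with the renormalization trick of kipf2016semi (the weight matrices here are random placeholders rather than trained parameters):

    import numpy as np

    def normalize_adj(A):
        # Renormalization trick: A_hat = D~^{-1/2} (A + I) D~^{-1/2}.
        A_tilde = A + np.eye(A.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
        return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    def gcn_forward(A_fused, X, W0, W1):
        # Two-layer GCN of equation (12): Z = softmax(A_hat ReLU(A_hat X W0) W1).
        A_hat = normalize_adj(A_fused)
        H = np.maximum(A_hat @ X @ W0, 0.0)                      # first layer + ReLU
        logits = A_hat @ H @ W1                                  # second layer
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)                  # row-wise softmax

    # Toy usage: 4 nodes, 5 input features, 8 hidden units, 3 classes.
    rng = np.random.default_rng(0)
    A_fused = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
    X = rng.normal(size=(4, 5))
    Z = gcn_forward(A_fused, X, rng.normal(size=(5, 8)), rng.normal(size=(8, 3)))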

4 Experiments

For evaluating the proposed SF-GCN, we carry out experiments from four aspects. First, we compare the proposed SF-GCN with the baseline methods, which include graph convolutional networks (GCN) kipf2016semi with the combined views and Multi-GCN khan2019multi . Second, we utilize different multi-graph fusion methods to analyze the intrinsic mechanism of the proposed SF-GCN. Third, we compare the proposed SF-GCN with the state-of-the-art methods for node classification in citation networks. Finally, we run the proposed SF-GCN with partially lost structure to demonstrate the importance of the complete structure.

4.1 Datasets

We use the paper-citation networks of the citation networks in the experiments. The three popular datasets usually utilized in node classification are Cora, Citeseer and Pubmed. The Cora dataset has seven classes covering grouped machine learning publications and their undirected graph. The Citeseer dataset includes six classes of scientific papers and their undirected graph. In these two datasets, each publication stands for a node of the related graph and is represented by a binary bag-of-words vector, each element of which indicates the presence or absence of a word from the dictionary. The Pubmed dataset has three classes containing diabetes-related publications and their undirected graph. In this dataset, each paper (each node of the related graph) is described by a term frequency-inverse document frequency (TF-IDF) vector wu2019comprehensive . Table 1 shows the statistics of these datasets. To obtain the structure of the second view from the publication descriptions, we normalize the cosine similarity between these publications. If the similarity is greater than a threshold, we produce an edge between the corresponding nodes in the citation network. This configuration is the same as in the literature khan2019multi .
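
The second-view graph can be constructed as sketched below (our illustration; the similarity normalization and threshold value are not specified in this copy, so the threshold is exposed as a parameter).

    import numpy as np

    def second_view_adjacency(X, threshold):
        # Build the second-view graph: cosine similarity between node feature vectors,
        # thresholded to give unweighted, undirected edges without self-loops.
        Xn = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
        S = Xn @ Xn.T                        # symmetric cosine similarity matrix
        A2 = (S > threshold).astype(float)
        np.fill_diagonal(A2, 0.0)
        return A2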

Datasets | Nodes | Edges | Classes | Feature dimension | Label rate
Cora | 2,708 | 5,429 | 7 | 1,433 | 0.052
Citeseer | 3,327 | 4,732 | 6 | 3,703 | 0.036
Pubmed | 19,717 | 44,338 | 3 | 500 | 0.003
Table 1: Three datasets statistics in citation networks.

4.2 Experimental configuration

In the experiments, we follow the configuration of GCN kipf2016semi : we train a two-layer GCN for the same maximum number of epochs and test the model on the same set of labeled test samples. Moreover, we select the same validation set of labeled samples for hyper-parameter optimization (dropout rate for all layers, number of hidden units, and learning rate).

In the proposed SF-GCN, we initialize the linear relationship of the multiple structures and the regularization parameter, and then update these parameters in the iterative optimization. The number of iterations of the algorithm is set in practice according to its degree of convergence.

4.3 Comparison with the base-line methods

The proposed method (SF-GCN) is constructed on top of GCN kipf2016semi and attempts to mine the different structure information to complete the intrinsic structure of the multi-view data. Therefore, two baseline methods (GCN and Multi-GCN, which find and capture the different structure information from different considerations) are involved for processing multi-view data based on GCN. GCN for multi-view kipf2016semi concatenates the different structures to build a sparse block-diagonal matrix in which each block corresponds to a different structure (the adjacency matrix of a different graph). Multi-GCN khan2019multi preserves the significant part of the different structures by manifold ranking. In contrast with these baseline methods, the proposed SF-GCN can not only enhance the common structure but also retain the specific structure by structure fusion.
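
For reference, the block-diagonal concatenation used by the multi-view GCN baseline can be built as sketched below (our illustration using scipy; the function name is ours).

    import numpy as np
    from scipy.sparse import block_diag, csr_matrix

    def multiview_block_diagonal(adjs):
        # Stack the adjacency matrices of all views into one sparse block-diagonal
        # matrix; each block corresponds to one view's graph and blocks share no edges.
        return block_diag([csr_matrix(A) for A in adjs], format="csr")

    # Two views over the same 3 nodes give a 6x6 block-diagonal adjacency.
    A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    A2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], dtype=float)
    A_block = multiview_block_diagonal([A1, A2])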

Table 2 shows that the classification performance of SF-GCN outperforms that of the baseline methods on Cora, Citeseer and PubMed. Moreover, GCN for multi-view is not superior to GCN for a single view, which demonstrates that how the information of multi-view data is mined is a key point for node classification. SF-GCN mines the structure information of the multi-view data for this purpose and obtains the better performance.

Method | Cora | Citeseer | PubMed
GCN kipf2016semi for view1 | – | – | –
GCN kipf2016semi for view2 | – | – | –
GCN kipf2016semi for multi-view | – | – | –
Multi-GCN khan2019multi | – | – | NA
SF-GCN | 83.3 | 73.4 | 79.3
Table 2: Accuracy comparison of the SF-GCN method with the baseline methods for node classification in citation networks. View1 stands for the graph structure from the original dataset, while view2 indicates the graph structure from the cosine similarity of the node representations.

4.4 Structure fusion generalization

Structure fusion (SF) focuses on the complementation of the distribution structures from the different view data and is defined in Section 3.3. However, the diffusion yang2012affinity bai2017regularized and propagation LINGF2018 Lin2018structure of the different structures can also describe the complex relationships of the various structures and become an important part of structure fusion. Therefore, we can define a fusion structure by the propagation fusion (PF) of the different structures as follows.

(13)

Propagation fusion can exchange and interact the relationship information between the various structures and mine the neighbour relationships of the multiple structures. However, this propagation can affect the clustering property of the original structure through high-order iterative multiplication. Therefore, we only consider the zero-order (i.e., SF) and first-order (i.e., PF) multiplication, which yields structure propagation fusion (SPF) as follows.

(14)
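
Equations (13) and (14) are not recoverable from this copy, so the following sketch is only one plausible reading of the zero-order and first-order combination described above: SF as the weighted sum of the structures, PF as products of pairs of weighted structures, and SPF as their sum. The actual operators in the paper may differ.

    import numpy as np

    def sf(adjs, w):
        # Zero-order term (structure fusion): weighted sum of the view structures.
        return sum(wi * Ai for wi, Ai in zip(w, adjs))

    def pf(adjs, w):
        # First-order term (propagation fusion, one plausible reading): products of pairs
        # of weighted structures, letting each view's relations diffuse through the others.
        out = np.zeros_like(adjs[0])
        for i in range(len(adjs)):
            for j in range(len(adjs)):
                if i != j:
                    out += (w[i] * adjs[i]) @ (w[j] * adjs[j])
        return out

    def spf(adjs, w):
        # Structure propagation fusion: zero-order plus first-order terms.
        return sf(adjs, w) + pf(adjs, w)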

For evaluating structure fusion generalization, we compare structure fusion based graph convolutional networks (SF-GCN), propagation fusion based graph convolutional networks (PF-GCN) and structure propagation fusion based graph convolutional networks (SPF-GCN). In Table 3, we observe that the performance of SPF-GCN is better than that of the other methods on Cora, Citeseer and PubMed, while the performance of SF-GCN is superior to that of PF-GCN. Therefore, PF and SF are both beneficial for further mining the structure information, and the role of SF is more important than that of PF.

Method | Cora | Citeseer | PubMed
SF-GCN | 83.3 | 73.4 | 79.3
PF-GCN | – | – | –
SPF-GCN | 83.5 | 73.5 | 80.0
Table 3: Structure fusion generalization classification accuracy of three methods: structure fusion based graph convolutional networks (SF-GCN), propagation fusion based graph convolutional networks (PF-GCN) and structure propagation fusion based graph convolutional networks (SPF-GCN).

4.5 Comparison with the state-of-the-arts

Because graph convolutional networks and structure fusion are the basic ideas behind the proposed SPF-GCN, we analyze six related state-of-the-art methods for evaluating SPF-GCN. These methods fall into two categories: one exploits node neighbour information for GCN, and the other performs node information fusion based on GCN.

Node neighbour information exploitation attempts to capture the distribution structure of the node neighbourhood to obtain a stable graph structure representation. For example, graph attention networks (GAT) can specify different weights to different nodes in a neighborhood Veli2017Graph ; stochastic training of graph convolutional networks (StoGCN) allows sampling an arbitrarily small neighbor size 2017arXiv171010568C ; deep graph infomax (DGI) can maximize the mutual information between subgraphs of different levels centered around nodes of interest (a different way of considering neighbour information) Veli2018Deep .

Node information fusion tries to mine the information from multi-view node descriptions or multiple structures to complement the differences of multi-view data. For instance, large-scale learnable graph convolutional networks (LGCN) can fuse neighbouring node features by ranking selection to transform graph data into grid-like structures in 1-D format 2018arXiv180803965G ; dual graph convolutional networks (DGCN) can consider local and global consistency to fuse the different view graphs of the raw data zhuang2018dual ; Multi-GCN can extract and select the significant structure from the multi-view structures by manifold ranking khan2019multi .

The proposed SPF-GCN belongs to the node information fusion category; the difference compared with the above methods is that it complements the multiple structures by mining their commonality, specificity and interactive propagation. Table 4 shows that SPF-GCN outperforms the other state-of-the-art methods except DGCN on the Cora and PubMed datasets. Although SPF-GCN and DGCN reach the same performance on Cora and PubMed, SPF-GCN preserves the high computational efficiency of the original GCN because the structure fusion computation is separable from the GCN.

Method | Cora | Citeseer | PubMed
GAT Veli2017Graph | – | – | –
StoGCN 2017arXiv171010568C | – | – | –
DGI Veli2018Deep | – | – | –
LGCN 2018arXiv180803965G | – | – | –
DGCN zhuang2018dual | 83.5 | – | 80.0
Multi-GCN khan2019multi | – | – | NA
SF-GCN | 83.3 | 73.4 | 79.3
SPF-GCN | 83.5 | 73.5 | 80.0
Table 4: Accuracy comparison of SF-GCN and SPF-GCN with state-of-the-art methods for node classification in citation networks.

4.6 Incomplete structure influence

Structure fusion can capture the complementary information of multiple structures, and this complementary information supplies an efficient way to cope with the influence of an incomplete structure. The main reasons for an incomplete structure are noise and data loss in practical situations. To evaluate the performance of the proposed methods under the condition of an incomplete structure, we design an experiment on all datasets. In semi-supervised classification, the distribution structure of the test data is more important than that of the training data, and it assures the classification performance because of the transfer relation of structure between the training and test data. Therefore, we delete some of the structure of the test data to destroy this transfer relation and simulate an incomplete structure.

In detail, we proportionally set the elements of the adjacency matrix (the graph structure from the original dataset) that correspond to the test data to zero, and then respectively implement GCN for multi-view kipf2016semi , DGCN zhuang2018dual , SF-GCN and SPF-GCN on all datasets. In Figure 3, we select several structure loss degrees to construct the different classification models for evaluating the performance of the compared methods. Notably, the descent of the SPF-GCN classification accuracy remains small as the structure loss increases on Cora, Citeseer and PubMed. We can observe that the proposed SF-GCN and SPF-GCN are more stable and robust than GCN for multi-view and DGCN as the degree of structure incompleteness increases. In this situation, the performance of SPF-GCN is better than that of SF-GCN, while the performance of GCN outperforms that of DGCN on the Cora dataset and the performance of DGCN is superior to that of GCN on the Citeseer and PubMed datasets. The reasons are analyzed in Section 4.7.
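
The corruption procedure can be sketched as below (our illustration; beyond "proportionally set to zero", the exact deletion scheme used by the authors is not specified in this copy). It removes a given fraction of the edges incident to test nodes, keeping the adjacency symmetric.

    import numpy as np

    def drop_test_edges(A, test_idx, loss_degree, seed=0):
        # Simulate an incomplete structure: zero out a fraction `loss_degree` of the
        # undirected edges that touch test nodes.
        rng = np.random.default_rng(seed)
        A = A.copy()
        test_mask = np.zeros(A.shape[0], dtype=bool)
        test_mask[test_idx] = True
        rows, cols = np.where(np.triu(A, k=1) > 0)                 # each undirected edge once
        candidates = np.flatnonzero(test_mask[rows] | test_mask[cols])
        drop = rng.choice(candidates, size=int(loss_degree * candidates.size), replace=False)
        A[rows[drop], cols[drop]] = 0.0
        A[cols[drop], rows[drop]] = 0.0
        return A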

Figure 3: Impact of structure loss on classification accuracy for citation networks on the (a) Cora, (b) Citeseer and (c) PubMed datasets.

4.7 Experimental results analysis

In our experiments, we compare the proposed method with the baseline methods (Multi-GCN khan2019multi and GCN kipf2016semi for multi-view, view1 and view2 in Section 4.3), the structure fusion generalization methods (PF-GCN and SF-GCN in Section 4.4), and six state-of-the-art methods (GAT Veli2017Graph , StoGCN 2017arXiv171010568C , DGI Veli2018Deep , LGCN 2018arXiv180803965G , DGCN zhuang2018dual and Multi-GCN khan2019multi in Section 4.5). These methods utilize graph structure mining based on graph convolutional networks for semi-supervised classification in different ways. In contrast to the other methods, the proposed SF-GCN and SPF-GCN focus on the complementary relationship of multiple structures by considering their commonality and specificity. Moreover, the proposed SPF-GCN not only captures the optimized distribution of the fusion structure but also emphasizes the interactive propagation between the different structures. From the above experiments, we can make the following observations.

  • The performance of SF-GCN is superior to that of the baseline methods (Multi-GCN khan2019multi and GCN kipf2016semi for multi-view, view1 and view2 in Section 4.3). GCN kipf2016semi constructs a general graph convolutional architecture by a first-order approximation of spectral graph convolutions, greatly improving the computational efficiency of graph convolutional networks, and provides a feasible deep mining framework for effective semi-supervised classification. For using multiple structures, GCN for multi-view inputs a sparse block-diagonal matrix, each block of which corresponds to a different structure. The relationship between the blocks (the different structures) is therefore ignored by GCN, which leads to the poor performance of GCN for multi-view (sometimes worse than that of GCN for view1). In contrast, Multi-GCN khan2019multi captures the relationships of the different structures to preserve the significant structure of the merged subspace. However, Multi-GCN neglects the optimized fusion relationship of the different structures, while the proposed SF-GCN focuses on finding this relationship by jointly considering the commonality and specificity losses of the multiple structures, thereby obtaining the better semi-supervised classification performance.

  • SPF-GCN shows the best performance in the structure fusion generalization experiments, whereas the performance of SF-GCN is better than that of PF-GCN. The main reason is that SF-GCN emphasizes the complementary information obtained by the optimized fusion relationship of the different structures, while PF-GCN tends toward interactive propagation through the diffusion influence between the different structures. The complementary fusion plays the more important role because of the specific structure of each individual view's data, but both fusion and propagation contribute to mining the multiple structures and enhancing the performance of semi-supervised classification.

  • The performance improvement of SPF-GCN compared with the six state-of-the-art methods differs from method to method. SPF-GCN shows similar performance to LGCN and DGCN on Cora and to DGCN on PubMed. Apart from these cases, SPF-GCN demonstrates the clearer improvement over the other methods. The main reason is that LGCN emphasizes neighbouring node feature fusion for a stable node representation and DGCN correlates the local and global consistency to complement the different structures. The proposed SPF-GCN expects not only to capture the structure commonality for complementing the different information, but also to preserve the structure specificity for mining the discriminative information. Therefore, the proposed SPF-GCN improves the classification performance in most experiments, and at the least it attains performance similar to the best of the other methods in all experiments. In addition, the proposed SPF-GCN is based on the GCN framework, so it has an efficient implementation like GCN. In our experiments, the computational efficiency of the proposed SPF-GCN is higher than that of the state-of-the-art methods (see the complexity analysis in Section 3.3).

  • Structure shows the distribution of the data and is very important for learning a GCN model. An incomplete structure can evaluate the robustness of the related GCN models. We select the classical GCN, the state-of-the-art DGCN, SF-GCN and SPF-GCN for the robustness test. The proposed SPF-GCN shows the best performance on the three datasets. On Cora, the performance of GCN is better than that of DGCN, while the performance of GCN is worse than that of DGCN on Citeseer and PubMed. This shows that the local and global consistency used for fusing graph information in DGCN tends to become unstable because of the tight consistency constraint on the incomplete structure, whereas the loose constraint of GCN on the incomplete structure correlation leads to the worse performance on the other datasets. The proposed SPF-GCN compromises between these constraints to balance the incomplete structure information by optimizing the weights of the multiple structures, and also connects the different structures to complement the different information. Therefore, the proposed SPF-GCN obtains the best performance in these experiments.

  • The proposed SPF-GCN expects to mine the commonality and the specificity of multiple structures. The commonality describes the similarity characteristic of the structures by the Grassmann manifold metric, while the specificity describes the difference characteristic of the structures by spectral embedding. In the proposed method, the specificity is constructed on the basis of the commonality. Therefore, we only execute the ablation experiment that preserves the commonality loss by deleting the specificity loss from the total loss. The results of this ablation on Cora, Citeseer and PubMed are obviously worse than the performance of the proposed SF-GCN and SPF-GCN, which balance the commonality and specificity for mining the suited weights of the multiple structures.

5 Conclusion

We have proposed structure fusion based on graph convolutional networks (SF-GCN) to address the diversity and complexity of multi-view data for semi-supervised classification. SF-GCN can not only adapt spectral embedding to preserve the specificity of each structure, but also model the relationships of the different structures to find the commonality of multiple structures by a manifold metric. Furthermore, the proposed structure propagation fusion based graph convolutional networks (SPF-GCN) combine the structure fusion framework with structure propagation to generate a more complete structure graph for improving the performance of semi-supervised classification. Finally, the optimization learning of SF-GCN obtains both the suitable weights for the different structures and the merged embedding space. For evaluating the proposed SF-GCN and SPF-GCN, we carry out comparison experiments with the baseline methods, the different multi-graph fusion methods and the state-of-the-art methods, together with a lost-structure analysis, on the Cora, Citeseer and Pubmed datasets. Experimental results demonstrate that SF-GCN and SPF-GCN achieve promising results in semi-supervised classification.

6 Acknowledgements

The authors would like to thank the anonymous reviewers for their insightful comments, which helped improve the quality of this paper. This work was supported by NSFC (Program No. 61771386, Program No. 61671376 and Program No. 61671374) and the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2017JZ020).

References