I. Introduction
Graph-based representations are powerful tools for describing structured data in terms of pairwise relationships between components. The main challenge in analyzing graph-based data is how to learn effective numeric features of discrete graph structures. One way to achieve this is to employ graph kernels, which can characterize graph structures in a high-dimensional space and thus better preserve the structural information [1].
I-A. Related Works
In machine learning, a graph kernel is defined in terms of a similarity measure between graph structures. One of the most successful and widely used approaches to defining kernels between a pair of graphs is to decompose the graphs into substructures and to compare/count pairs of specific isomorphic substructures [1]. Specifically, any graph decomposition can be used to define a kernel, e.g., graph kernels based on comparing all pairs of decomposed a) walks, b) paths, and c) restricted subgraph or subtree structures. Following this scenario, Kashima et al. [2] have proposed a Random Walk Kernel that compares pairs of isomorphic random walks in a pair of graphs. Borgwardt et al. [3] have proposed a Shortest Path Kernel that counts the number of pairwise shortest paths of the same length in a pair of graphs. Costa and Grave [4] have defined a Neighborhood Subgraph Pairwise Distance Kernel by counting the number of pairwise isomorphic neighborhood subgraphs. Gaidon et al. [5] have developed a Subtree Kernel for comparing videos, by considering complex actions as decomposed spatio-temporal parts and building corresponding binary trees; the resulting kernel is computed by counting the number of isomorphic subtree patterns. Other alternative graph kernels that are specifically based on the R-convolution framework include a) the Segmentation Graph Kernel [6], b) the Pyramid Quantized Weisfeiler-Lehman Kernel [7], c) the Subgraph Matching Kernel [8], and d) the Quantum-inspired Jensen-Shannon Kernel [9].
One major drawback of most existing R-convolution kernels is that they neglect the relative locational information between substructures. Specifically, R-convolution kernels usually add a unit value whenever a pair of similar substructures is identified. However, these kernels cannot determine whether the similar substructures are correctly aligned with the overall graph structures, i.e., they do not check whether the topological arrangement of the substructures is globally consistent. In a protein matching problem, for instance, similar substructures may come from different parts of the overall structure; R-convolution kernels will count these as matching substructures, despite the fact that they are not correctly aligned. To overcome this drawback, Bai et al.
[10, 11] have developed a family of novel vertex-based matching kernels by aligning depth-based representations of vertices [12]. All these matching kernels can be seen as aligned subgraph or subtree kernels that incorporate explicit structural correspondences, and thus address the drawback of neglecting relative locations between substructures arising in the R-convolution kernels. Unfortunately, these matching kernels are not positive definite in general, because their alignment steps are not transitive. In other words, if vertex $v$ of graph $G_p$ is aligned to vertex $w$ of graph $G_q$, and $w$ is aligned to vertex $u$ of graph $G_r$, we cannot in general guarantee that $v$ is also aligned to $u$. On the other hand, Fröhlich et al. [13] have demonstrated that a transitive alignment step is necessary to guarantee the positive definiteness of vertex- or edge-based matching kernels. Furthermore, both the R-convolution kernels and the matching kernels capture characteristics only for each individual pair of graphs, and thus ignore the information contained in the other graphs. In summary, developing effective graph kernels remains a challenge.
I-B. Contributions
The aim of this work is to address the aforementioned shortcomings of existing graph kernels by developing a new Hierarchical Transitive-Aligned Kernel (HTAK) for unattributed graphs. The key innovation of the proposed kernel is that of transitively aligning vertices between pairs of graphs through a family of hierarchical prototype representations. That is, given three vertices $v$, $w$ and $u$ from three different sample graphs, if $v$ and $w$ are aligned, and $w$ and $u$ are aligned, the proposed kernel can guarantee that $v$ and $u$ are also aligned. As a result, the proposed kernel is theoretically guaranteed to be positive definite. Specifically, the main contributions of this work are threefold.
First, we propose a framework to compute a family of hierarchical prototype representations that encapsulate the dominant characteristics of the vectorial vertex representations over a set of graphs $\mathbf{G}$. This is achieved by hierarchically performing $k$-means clustering to identify a pre-assigned number of cluster centroids as the level-$l$ prototype representations over the level-$(l-1)$ prototype representations, where the level-$1$ representations correspond to the original vectorial vertex representations of all graphs. Varying the level parameter $l$ in turn generates a family of hierarchical prototype representations. We show that the new hierarchical prototype representations not only reflect the general structural information over all graphs, but also represent a reliable pyramid of vertices over all graphs at different levels.
Second, with the family of hierarchical prototype representations to hand, we develop a graph matching method by hierarchically aligning the vertices of each graph to its different level prototype representations. The resulting HTAK kernel is defined by counting the number of aligned vertex pairs. We show that the proposed kernel not only overcomes the shortcoming of ignoring correspondence information between isomorphic substructures that arises in most existing R-convolution kernels, but also guarantees the transitivity of the correspondence information. As a result, the proposed kernel is guaranteed to be positive definite, a property that is not available for existing alignment kernels [10, 11]. Furthermore, unlike most existing graph kernels, the proposed kernel incorporates the information of all graphs under comparison into the kernel computation process, and thus encapsulates richer characteristics.
Third, by transductively training the C-SVM classifier associated with the proposed HTAK kernel, we empirically demonstrate the effectiveness of the new kernel approach. The proposed kernel outperforms state-of-the-art graph kernels as well as graph neural network models on standard graph datasets in terms of classification accuracy.
II. Hierarchical Prototype Representations
In this section, we propose a framework to compute a family of hierarchical prototype representations that encapsulate the dominant characteristics over all vectorial vertex representations in a set of graphs $\mathbf{G}$. An instance of the proposed framework is shown in Fig. 1. Specifically, let $\mathbf{X} = \{x_1, \ldots, x_{M_1}\}$ denote the $K$-dimensional vectorial representations of the vertices over all graphs in $\mathbf{G}$. We first adopt $\mathbf{X}$ as the set of level-$1$ prototype representations $\mathbf{PR}^1$, i.e.,

$\mathbf{PR}^1 = \mathbf{X} = \{x_1, \ldots, x_{M_1}\}$, (1)
where the superscript $1$ indicates the current value of the level parameter $l$ ($1 \leq l \leq H$), and each $i$-th element corresponds to the vectorial representation $x_i$ of a vertex. To compute the set of the higher-level (i.e., $l \geq 2$) prototype representations $\mathbf{PR}^l$, we employ $k$-means [14] to localize $M_l$ centroid points over the set of the lower-level prototype representations $\mathbf{PR}^{l-1}$, by minimizing the objective function

$\arg\min_{\Omega^l} \sum_{j=1}^{M_l} \sum_{x \in c_j^l} \| x - \mu_j^l \|^2$, (2)
where $\Omega^l = \{c_1^l, \ldots, c_{M_l}^l\}$ represents the $M_l$ clusters over the set of level-$(l-1)$ prototype representations $\mathbf{PR}^{l-1}$, and $\mu_j^l$ is the mean of the prototype representations belonging to the $j$-th cluster $c_j^l$. We employ the $M_l$ means as the set of level-$l$ prototype representations $\mathbf{PR}^l$, i.e.,

$\mathbf{PR}^l = \{\mu_1^l, \ldots, \mu_{M_l}^l\}$, (3)

where each $j$-th element corresponds to the centroid $\mu_j^l$, and $M_l$ corresponds to the number of the level-$l$ prototype representations in $\mathbf{PR}^l$.
Since $M_l$ is usually much smaller than $M_{l-1}$, the initial set of level-$1$ prototype representations corresponds to the original vectorial representations of the vertices over all graphs in $\mathbf{G}$, and each set of level-$l$ prototype representations is computed through the $k$-means objective function (i.e., Eq. (2)), which gradually minimizes the within-cluster sum of squares over the set of level-$(l-1)$ prototype representations $\mathbf{PR}^{l-1}$. When we vary the parameter $l$ from $1$ to $H$, this procedure naturally forms a family of hierarchical prototype representations as

$\mathbf{HPR} = \{\mathbf{PR}^1, \ldots, \mathbf{PR}^H\}$, (4)

where each $\mathbf{PR}^l$ is the set of level-$l$ prototype representations, and $\mathbf{HPR}$ represents a reliable pyramid of the original vertex representations over all graphs at different levels (i.e., the prototype representations of different levels).
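The construction above can be sketched in a few lines; a minimal illustration assuming Euclidean vertex vectors and a plain Lloyd's $k$-means (the paper relies on the implementation of [24]; the function names below are hypothetical):

```python
import numpy as np

def kmeans_centroids(points, n_clusters, n_iter=20, seed=0):
    """Plain Lloyd's k-means; returns the cluster centroids (Eq. (2))."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign every point to its nearest centroid ...
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # ... and move each centroid to the mean of its cluster
        for j in range(n_clusters):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def hierarchical_prototypes(vertex_vectors, cluster_sizes):
    """Level-1 prototypes are the raw vertex vectors of all graphs;
    each higher level is obtained by clustering the level below."""
    levels = [vertex_vectors]
    for m in cluster_sizes:          # m plays the role of M_l, pre-assigned per level
        levels.append(kmeans_centroids(levels[-1], m))
    return levels
```

Each returned element plays the role of one $\mathbf{PR}^l$; a shrinking `cluster_sizes` list realizes the pyramid structure.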
Note that, to compute the family of hierarchical prototype representations, in this work we employ the $K$-dimensional depth-based (DB) representations as the original vectorial vertex representations used to compute the different sets of level-$l$ prototype representations. Certainly, computing vertex representations is an open problem; one can also utilize any other approach to compute the initial vectorial vertex representations [15, 16]. Specifically, in this work, the DB representation of each vertex is defined by measuring the entropies of a family of $h$-layer expansion subgraphs rooted at the vertex [12], where $h$ varies from $1$ to $K$. Since each $h$-layer expansion subgraph completely contains the whole topological structure of the $(h-1)$-layer expansion subgraph, such a $K$-dimensional DB representation encapsulates rich entropic content flow from each local vertex to the global graph structure, as a function of depth. Fig. 2 exhibits the detailed process of computing the DB representation. Specifically, for each sample graph $G_p$ (indicated in black in Fig. 2) and its $i$-th vertex $v_i$ (indicated in red), we commence by computing the $h$-layer neighborhood set as

$\mathcal{N}_i^h = \{ v_j \in V_p \mid s(v_i, v_j) \leq h \}$,
where $s(v_i, v_j)$ is the shortest path length between the $i$-th vertex $v_i$ and the $j$-th vertex $v_j$. The resulting $h$-layer expansion subgraph $\mathcal{G}_i^h$ is defined as the substructure preserving the vertices in $\mathcal{N}_i^h$ as well as the edges between them from the original global graph $G_p$, i.e., the substructure surrounded by the red broken line in Fig. 2. Similarly, we also construct the $2$-layer and $3$-layer expansion subgraphs, surrounded by the green and blue broken lines respectively in Fig. 2. By parity of reasoning, we generate a family of $h$-layer expansion subgraphs rooted at $v_i$ ($h = 1, \ldots, K$). Note that, if $h$ is greater than the longest shortest path from $v_i$ to the remaining vertices of $G_p$, the $h$-layer expansion subgraph is the global structure of $G_p$. The resulting $K$-dimensional DB representation rooted at $v_i$ is

$\mathrm{DB}_K(v_i) = \left[ H_S(\mathcal{G}_i^1), H_S(\mathcal{G}_i^2), \ldots, H_S(\mathcal{G}_i^K) \right]^\top$,
where $H_S(\cdot)$ is the Shannon entropy of a (sub)graph associated with its steady-state random walk [11].
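A DB representation under these definitions can be sketched as follows; a simplified illustration that takes the steady-state random-walk distribution on an undirected (sub)graph to be proportional to vertex degree, with illustrative function names rather than the authors' code:

```python
import numpy as np
from collections import deque

def bfs_dists(adj, root):
    """Shortest-path lengths from root over an adjacency-list graph."""
    dist = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def rw_entropy(adj, nodes):
    """Shannon entropy of the steady-state random walk on the subgraph
    induced by `nodes`: p(v) is proportional to the induced degree of v."""
    nodes = set(nodes)
    degs = np.array([sum(1 for w in adj[v] if w in nodes) for v in nodes], float)
    total = degs.sum()
    if total == 0:
        return 0.0
    p = degs[degs > 0] / total
    return float(-(p * np.log2(p)).sum())

def db_representation(adj, root, K):
    """K-dimensional depth-based vector: entropy of the h-layer
    expansion subgraph rooted at `root`, for h = 1..K."""
    dist = bfs_dists(adj, root)
    return np.array([rw_entropy(adj, [v for v, d in dist.items() if d <= h])
                     for h in range(1, K + 1)])
```

For the path graph 0–1–2–3 rooted at vertex 0, the 1-, 2- and 3-layer entropies are 1.0, 1.5 and $1/3 + \log_2 3$; once $h$ reaches the eccentricity of the root, the expansion subgraph is the whole graph, matching the note above.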
III. Hierarchical Transitive-Aligned Kernels
In this section, we propose a novel Hierarchical Transitive-Aligned Kernel (HTAK) for unattributed graphs. We commence by introducing a new hierarchical transitive vertex matching method based on the family of hierarchical prototype representations. We then develop the HTAK kernel based on this vertex matching method.
III-A. Hierarchical Transitive Vertex Matching Methods
In this subsection, we develop a new hierarchical transitive vertex matching method by hierarchically aligning the vertices of each graph to each set of level-$l$ prototype representations from the family of hierarchical prototype representations defined in Section II. For a set of graphs $\mathbf{G}$, we commence by computing the family of hierarchical prototype representations $\mathbf{HPR}$ over the $K$-dimensional vectorial vertex representations of all graphs. To establish the correspondence information between the graph vertices, we align the vectorial vertex representations of a sample graph $G_p$ to each set of level-$l$ prototype representations $\mathbf{PR}^l$. The alignment process is similar to that introduced in [11] for point matching in a pattern space. Specifically, we compute a level-$l$ affinity matrix in terms of the Euclidean distances between the two sets of points as
$R_p^l(i, j) = \left\| \mathrm{DB}_K(v_i) - \mu_j^l \right\|_2$, (5)

where $R_p^l$ is a $|V_p| \times M_l$ matrix, and each element $R_p^l(i, j)$ represents the distance between the $K$-dimensional vectorial representation of $v_i \in V_p$ and the $j$-th level-$l$ prototype representation $\mu_j^l \in \mathbf{PR}^l$. For the affinity matrix $R_p^l$, the rows index the vertices of $G_p$, and the columns index the level-$l$ prototype representations in $\mathbf{PR}^l$. If $R_p^l(i, j)$ is the smallest element in column $j$, we say that the $K$-dimensional vectorial representation of $v_i$ is aligned to the $j$-th level-$l$ prototype representation $\mu_j^l$.
Similarly, for each other sample graph $G_q \in \mathbf{G}$, we also align the $K$-dimensional vectorial representation of each vertex $v_k \in V_q$ to each set of level-$l$ prototype representations $\mathbf{PR}^l$. We compute the elements of the corresponding affinity matrix $R_q^l$ as

$R_q^l(k, j) = \left\| \mathrm{DB}_K(v_k) - \mu_j^l \right\|_2$. (6)
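The column-wise alignment rule above (each level-$l$ prototype matched to its nearest vertex) can be sketched as follows; the helper name is hypothetical, and ties are broken by argmin's first-index rule:

```python
import numpy as np

def alignment_matrix(vertex_vecs, prototypes):
    """M[i, j] = 1 iff R(i, j) is the smallest entry of column j, i.e.
    vertex i is the nearest vertex (Euclidean) to level-l prototype j."""
    R = np.linalg.norm(vertex_vecs[:, None, :] - prototypes[None, :, :], axis=2)
    M = np.zeros_like(R, dtype=int)
    M[R.argmin(axis=0), np.arange(prototypes.shape[0])] = 1
    return M
```

Each graph gets one such binary matrix per level; these play the role of the correspondence-state matrices of Eqs. (7) and (8).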
Definition (Vertex matching between a pair of graphs): For a pair of graphs $G_p$ and $G_q$ of $\mathbf{G}$, if $R_p^l(i, j)$ and $R_q^l(k, j)$ are both the smallest elements in the $j$-th columns of $R_p^l$ and $R_q^l$ respectively, we say that the vertex $v_i$ of $G_p$ and the vertex $v_k$ of $G_q$ are aligned, i.e., there is a one-to-one correspondence between $v_i$ and $v_k$. More formally, let the level-$l$ correspondence matrix $M_p^l$ record the state of alignments for $G_p$, with

$M_p^l(i, j) = \begin{cases} 1 & \text{if } R_p^l(i, j) \text{ is the smallest element in column } j, \\ 0 & \text{otherwise.} \end{cases}$ (7)
Note that $\mathcal{N}_i^h$ indicates the set of vertices whose shortest path to $v_i$ has length at most $h$, and the condition on $h$ guarantees that the $h$-layer expansion subgraph rooted at $v_i$ does not surpass the global structure of $G_p$ (i.e., the $K$-dimensional DB representation of $v_i$ exists). Similarly, the level-$l$ correspondence matrix $M_q^l$ records the state of alignments for $G_q$, and satisfies

$M_q^l(k, j) = \begin{cases} 1 & \text{if } R_q^l(k, j) \text{ is the smallest element in column } j, \\ 0 & \text{otherwise.} \end{cases}$ (8)
Based on Eq. (7) and Eq. (8), the level-$l$ correspondence matrix $C_{pq}^l$, which records the correspondence information between pairwise vertices of $G_p$ and $G_q$, is defined as

$C_{pq}^l = M_p^l (M_q^l)^\top$. (9)

For the level-$l$ correspondence matrix $C_{pq}^l$, the rows index the vertices of $G_p$, and the columns index the vertices of $G_q$. If $C_{pq}^l(i, k) = 1$, there is a one-to-one correspondence between the vertices $v_i$ and $v_k$, i.e., we say that they are aligned or matched.
Note that the vertex alignment information identified by $C_{pq}^l$ is transitive, i.e., for three vertices $v$, $w$ and $u$ from three different graphs, if $v$ and $w$ are aligned, and $w$ and $u$ are aligned, then $v$ and $u$ are also aligned. This is because $C_{pq}^l$ identifies the vertex correspondences by evaluating whether the vertices are aligned to the same set of level-$l$ prototype representations $\mathbf{PR}^l$. Finally, by hierarchically aligning each graph to the sets of different level prototype representations from the family $\mathbf{HPR}$, we obtain a family of hierarchical transitive vertex correspondence matrices between $G_p$ and $G_q$ as

$\mathbf{C}_{pq} = \{ C_{pq}^1, \ldots, C_{pq}^H \}$. (10)
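Given the binary level-$l$ alignment matrices of two graphs, the pairwise correspondence of Eq. (9) amounts to a matrix product; a sketch under the same one-minimum-per-column convention (the function name is illustrative):

```python
import numpy as np

def correspondence_matrix(Mp, Mq):
    """C[i, k] = 1 iff vertex i of G_p and vertex k of G_q are aligned
    to a common level-l prototype, i.e. C = Mp Mq^T thresholded to {0, 1}."""
    return (Mp @ Mq.T > 0).astype(int)
```

Because two vertices match only by pointing at the same prototype column, alignment through a shared prototype is exactly what makes the matching transitive across graphs.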
Remarks: The procedure of computing the family of hierarchical correspondence matrices is completely unsupervised, since we do not utilize any class labels of the graphs in during the computational process.
III-B. The Hierarchical Transitive-Aligned Kernel
We now develop a new Hierarchical Transitive-Aligned Kernel (HTAK) for graphs, based on the hierarchical transitive vertex correspondence matrices between graphs.
Definition (The HTAK kernel): For the set of graphs $\mathbf{G}$, we commence by computing the $K$-dimensional DB representations of the vertices over all graphs in $\mathbf{G}$ as the level-$1$ prototype representations $\mathbf{PR}^1$. Based on $\mathbf{PR}^1$ and the definitions in Section II, we generate a family of hierarchical prototype representations $\mathbf{HPR} = \{\mathbf{PR}^1, \ldots, \mathbf{PR}^H\}$, where $\mathbf{PR}^l$ represents the set of level-$l$ prototype representations. For a pair of graphs $G_p$ and $G_q$ from $\mathbf{G}$, by aligning the vertices of $G_p$ and $G_q$ to the sets of different level prototype representations, we compute the family of hierarchical transitive vertex correspondence matrices $\mathbf{C}_{pq} = \{C_{pq}^1, \ldots, C_{pq}^H\}$ between $G_p$ and $G_q$ based on Eq. (10). With $\mathbf{C}_{pq}$ to hand, the proposed HTAK kernel between $G_p$ and $G_q$ is defined as
$k_{\mathrm{HTAK}}(G_p, G_q) = \sum_{l=1}^{H} \sum_{i=1}^{|V_p|} \sum_{k=1}^{|V_q|} C_{pq}^l(i, k)$, (11)

where $H$ is the greatest value of the level parameter $l$ (i.e., $l$ varies from $1$ to $H$). As we have stated in Section II, the parameter $K$ indicates the dimension of the vectorial vertex representations, and we propose to employ the $K$-dimensional DB representations of vertices as the vectorial vertex representations [12]. Since the DB representations are computed based on the $h$-layer expansion subgraphs ($h = 1, \ldots, K$), the greatest value of $K$ corresponds to the longest shortest path between vertices over all graphs in $\mathbf{G}$. Eq. (11) indicates that $k_{\mathrm{HTAK}}$ counts the number of aligned vertex pairs between $G_p$ and $G_q$ over all the level-$l$ vertex correspondence matrices $C_{pq}^l$.
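With the per-level alignment matrices in place, the kernel value of Eq. (11) reduces to an accumulation of correspondence-matrix entries; a sketch assuming the binary alignment matrices for every level are already available (the function name is hypothetical):

```python
import numpy as np

def htak_kernel(Mp_levels, Mq_levels):
    """k_HTAK(G_p, G_q): number of aligned vertex pairs accumulated
    over the levels l = 1..H (Eq. (11)), with C^l = Mp^l (Mq^l)^T."""
    return sum(int((Mp @ Mq.T).sum()) for Mp, Mq in zip(Mp_levels, Mq_levels))
```

In a full pipeline, `Mp_levels` would hold one alignment matrix per level of the prototype pyramid for graph $G_p$, and likewise for $G_q$.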
Lemma. The kernel $k_{\mathrm{HTAK}}$ is positive definite (pd).
Proof. Intuitively, the proposed HTAK kernel is pd, since it counts pairs of aligned vertices over the correspondence matrices, and the correspondence information identified by the proposed kernel is transitive. More formally, for the graph $G_p$, let $\phi_p^l$ be an $M_l$-dimensional feature vector that counts the numbers of vertices aligned to the corresponding level-$l$ prototype representations $\mathbf{PR}^l$, with

$\phi_p^l(j) = \sum_{i=1}^{|V_p|} M_p^l(i, j)$, (12)

where the $j$-th element of $\phi_p^l$ counts the number of vertices of $G_p$ that are aligned to the $j$-th level-$l$ prototype representation $\mu_j^l$, and $M_p^l$ is defined by Eq. (7). Similarly, for the graph $G_q$, we have the feature vector
$\phi_q^l(j) = \sum_{k=1}^{|V_q|} M_q^l(k, j)$. (13)
Based on Eq. (12) and Eq. (13), the HTAK kernel defined by Eq. (11) can be rewritten as

$k_{\mathrm{HTAK}}(G_p, G_q) = \sum_{l=1}^{H} \langle \phi_p^l, \phi_q^l \rangle$, (14)

where $\langle \cdot, \cdot \rangle$ is an inner product, i.e., a pd linear kernel. As a result, the kernel $k_{\mathrm{HTAK}}$ can be seen as a sum of linear kernels over all levels, and is thus pd.
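The step from Eq. (11) to Eq. (14) rests on a simple identity: the column sums of an alignment matrix form the histogram feature vector, and the inner product of two such histograms equals the summed entries of $M_p (M_q)^\top$. A small numerical check (illustrative code, not the authors'):

```python
import numpy as np

def level_feature(M):
    """phi(j) = number of vertices aligned to prototype j (Eq. (12))."""
    return M.sum(axis=0)

# sum_{i,k} (Mp Mq^T)[i,k] == <phi_p, phi_q> holds for any binary
# alignment matrices, so the HTAK kernel is a sum of linear pd kernels.
rng = np.random.default_rng(0)
Mp = (rng.random((6, 4)) < 0.3).astype(int)
Mq = (rng.random((5, 4)) < 0.3).astype(int)
assert (Mp @ Mq.T).sum() == level_feature(Mp) @ level_feature(Mq)
```

This explicit feature map is what the proof exploits: a kernel with an explicit finite-dimensional feature map is pd by construction.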
III-C. Discussions of the Proposed Kernel
The new vertex alignment kernel has some important properties that are not available in some existing state-of-the-art graph kernels.
First, unlike the existing alignment kernels [13, 11, 10, 17, 18] that can also identify correspondence information between vertices or edges, the vertex alignments identified by the proposed HTAK kernel are transitive. This is because, as stated in Section III-A, the vertex alignment method employed by the proposed kernel transitively aligns vertices between graphs. As a result, the proposed HTAK kernel not only overcomes the shortcoming of ignoring structural correspondences that arises in most R-convolution kernels, but also reflects more precise correspondence information than the existing alignment or matching kernels [13, 11, 10, 17, 19, 18].
Second, as Fröhlich et al. [13] have stated, a transitive alignment step is necessary to guarantee the positive definiteness of alignment kernels. Thus, the proposed HTAK kernel guarantees the positive definiteness that is not available to the aforementioned alignment kernels [13, 11, 10, 17, 19, 18].
Third, the computation of the proposed HTAK kernel for a pair of graphs incorporates the information over all graphs under comparison. This is because the kernel is computed by hierarchically aligning the vertices of each graph to the different level prototype representations of the family of hierarchical prototype representations, which is hierarchically identified by the $k$-means method over the $K$-dimensional vectorial vertex representations of all graphs in $\mathbf{G}$, i.e., the kernel is not computed from each individual pair of graphs alone. By contrast, most existing graph kernels only capture graph characteristics for each pair of graphs [3, 20, 21, 22, 23, 4]. As a result, the proposed kernel may reflect richer graph characteristics.
Finally, note that, since the proposed kernel is based on $K$-dimensional DB representations of vertices that do not encapsulate any vertex or edge label information, the proposed kernel cannot accommodate vertex or edge labels. However, we can still apply it to attributed graphs by focusing on the topological information without vertex/edge labels.
III-D. Computational Analysis
For the set of graphs $\mathbf{G}$, each of which has $n$ vertices and $m$ edges, the time complexity of computing the proposed kernel is governed by three steps, where $H$ corresponds to the number of different sets of level prototype representations in the family of hierarchical prototype representations, $t$ is the iteration number of the $k$-means method, and $M_l$ is the number of level-$l$ prototype representations in $\mathbf{PR}^l$. First, computing the required $K$-dimensional DB representations of vertices (i.e., the level-$1$ prototype representations $\mathbf{PR}^1$) relies on the shortest path computation on each graph. Second, computing the level-$l$ prototype representations of $\mathbf{PR}^l$ relies on the $k$-means method over the level-$(l-1)$ prototype representations of $\mathbf{PR}^{l-1}$. Third, calculating the kernel value between a pair of graphs relies on computing the level-$l$ correspondence matrix for each set of level-$l$ prototype representations $\mathbf{PR}^l$, and counting the number of vertices of each graph aligned to the prototype representations in $\mathbf{PR}^l$. Note that, in this work, we employ the fast $k$-means MATLAB implementation developed by Deng Cai [24] with its default number of iterations. Moreover, most graphs in this work are sparse. As a result, the proposed kernel can usually be computed in polynomial time.
Datasets  BAR31  BSPHERE31  GEOD31  MUTAG  NCI1  CATH2  COLLAB  IMDB-B  IMDB-M
Max # vertices
Mean # vertices
# graphs
# classes
Description
TABLE I: Information of the graph-based computer vision (CV), bioinformatics (Bio), and social network (SN) datasets.
IV. Experiments
We evaluate the proposed HTAK kernel on nine benchmark graph datasets from computer vision, bioinformatics, and social networks: BAR31, BSPHERE31, GEOD31, MUTAG, NCI1, CATH2, COLLAB, IMDB-B, and IMDB-M. The BAR31, BSPHERE31 and GEOD31 datasets are all abstracted from the SHREC 3D Shape database, which consists of a number of classes with 20 individuals per class [25]. Specifically, we establish the BAR31, BSPHERE31 and GEOD31 datasets through three mapping functions, i.e., a) ERG barycenter: distance from the center of mass/barycenter, b) ERG bsphere: distance from the center of the sphere that circumscribes the object, and c) ERG integral geodesic: the average of the geodesic distances to all other points. The other datasets are all available at http://graphkernels.cs.tu-dortmund. More details of these datasets are shown in Table I.
IV-A. Experiments on Graph Classification
Experimental Setup:
We evaluate the performance of the proposed HTAK kernel on graph classification problems over the aforementioned nine benchmark datasets. We compare our kernel with a) five alternative state-of-the-art graph kernels and b) four alternative state-of-the-art deep learning methods for graphs. Specifically, the graph kernels include 1) the aligned subtree kernel (ASK) [11], 2) the Weisfeiler-Lehman subtree kernel (WLSK) [26], 3) the shortest path graph kernel (SPGK) [3], 4) the graphlet count graph kernel (GCGK) [27] with graphlets of fixed size, and 5) the Jensen-Tsallis q-difference kernel (JTQK) [28]. On the other hand, the deep learning methods include 1) the deep graph convolutional neural network (DGCNN)
[29], 2) the PATCHY-SAN based convolutional neural network for graphs (PSGCNN) [30], 3) the diffusion convolutional neural network (DCNN) [31], and 4) the deep graphlet kernel (DGK) [32]. For the WLSK kernel and the JTQK kernel, we set the highest dimension (i.e., the highest height of subtrees) of the Weisfeiler-Lehman isomorphism (for the WLSK kernel) and the tree-index method (for the JTQK kernel) based on the statements of the authors in [28, 26]. For the ASK kernel, we set the highest layer of the required DB representation based on [11], to guarantee the best performance. For each kernel, we compute the kernel matrix on each graph dataset. We perform cross-validation where the classification accuracy is computed using a C-Support Vector Machine (C-SVM); in particular, we make use of the LIBSVM library [33]. For each dataset and each kernel, we compute the optimal C-SVM parameters. We repeat the whole experiment 10 times and report the average classification accuracy (± standard error) in Table II. Note that, for the proposed HTAK kernel, we vary the level parameter over a range of values; thus, for each dataset, we compute several kernel matrices for the HTAK kernel, and the classification accuracy for each run is the average accuracy over these kernel matrices. Moreover, for the proposed HTAK kernel on each dataset, we set the number of prototype representations per level as a fraction of the total number of vertices over all graphs in the dataset. For the alternative deep learning methods, we report the best results for the DGCNN, PSGCNN, DCNN, and DGK models from their original papers. Note that, although the PSGCNN model can leverage additional edge features, most of the graph datasets and the alternative methods do not leverage edge features; thus, we do not report the results associated with edge features in this evaluation. The classification accuracies and standard errors for each deep learning method are shown in Table III. Finally, note that, as we have stated in Section III-B, the computation of the HTAK kernel for a pair of graphs incorporates the information over all graphs under comparison. Thus, the proposed HTAK kernel can also incorporate the test graphs into the training process of the C-SVMs. In this sense, the proposed HTAK kernel can be seen as an instance of transductive learning [34] (i.e., we transductively train the C-SVM), where all the graphs available (from both the training and test sets) are used to compute the prototype representations. However, we do not observe the class labels of the test graphs during training. Finally, some methods were not evaluated by their original authors on some datasets, and thus we do not exhibit those results.
Datasets  BAR31  BSPHERE31  GEOD31  MUTAG  NCI1  CATH2  COLLAB  IMDB-B  IMDB-M

HTAK  
ASK  
WLSK  
SPGK  
GCGK  
JTQK 
Datasets  MUTAG  NCI1  COLLAB  IMDB-B  IMDB-M

HTAK  
DGCNN  
PSGCNN  
DCNN  
DGK 
Results and Discussions: In terms of classification accuracy, we observe that our HTAK kernel outperforms the alternative graph kernels and deep learning methods on most datasets. Among the alternative graph kernel methods, only the accuracies of the ASK kernel on the BAR31 and MUTAG datasets, the SPGK kernel on the IMDB-M dataset, and the JTQK kernel on the NCI1 dataset are higher than those of the proposed HTAK kernel. Among the alternative deep learning methods, only the accuracy of the PSGCNN model on the MUTAG dataset is higher than that of the proposed HTAK kernel.
In fact, the WLSK, ASK and JTQK kernels, as well as the alternative deep learning approaches, can all accommodate vertex label information, i.e., they can accommodate attributed graphs. By contrast, the proposed HTAK kernel is designed for unattributed graphs and cannot make use of any vertex label information. Moreover, only the deep learning methods provide an end-to-end learning framework for graph classification, whereas the proposed HTAK kernel associated with the C-SVM provides only a shallow learning framework. However, even in such a disadvantageous situation, the proposed HTAK kernel still outperforms these methods on most datasets. This indicates that the proposed kernel can learn better topological characteristics of graphs than the alternative methods, through the family of hierarchical prototype representations that represents a reliable pyramid of the original vertex representations over all graphs at different levels.
The reasons for this effectiveness are fourfold. First, unlike the alternative WLSK, SPGK, GCGK and JTQK kernels that ignore the correspondence information between substructures, the proposed HTAK kernel can hierarchically identify the vertex correspondence information through the hierarchical prototype representations. Second, compared to the ASK kernel, the correspondence information identified by the HTAK kernel is transitive, whereas the ASK kernel cannot guarantee transitivity; as a result, the HTAK kernel can capture more precise information for graphs. Third, unlike the alternative kernels, only the proposed kernel incorporates the information of all graphs under comparison into the kernel computation, and thus reflects richer graph characteristics. Fourth, similar to the WLSK, SPGK, GCGK and JTQK kernels, none of the alternative deep learning methods incorporates structural correspondence information into the learning framework. Overall, the above observations demonstrate the effectiveness of the proposed HTAK kernel.
V. Conclusions
In this paper, we have developed a new Hierarchical Transitive-Aligned Kernel for graphs that transitively aligns the vertices between graphs through a family of hierarchical prototype representations. Unlike most state-of-the-art graph kernels, this kernel not only overcomes the shortcoming of ignoring correspondence information between graphs, but also guarantees the transitivity of the correspondence information. Experimental evaluations have demonstrated the effectiveness of the proposed transitive-aligned kernel, which outperforms state-of-the-art graph kernels as well as deep learning methods in terms of graph classification.
In future work, we will extend the proposed kernel to attributed graphs, so that it can accommodate vertex label information into the computation, further improving its performance.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant no. 61976235 and 61602535), the Open Projects Program of the National Laboratory of Pattern Recognition (NLPR), and the program for innovation research in Central University of Finance and Economics and the Youth Talent Development Support Program by Central University of Finance and Economics, No. QYP1908.
References
 [1] D. Haussler, “Convolution kernels on discrete structures,” Technical Report UCSC-CRL-99-10, Santa Cruz, CA, USA, 1999.
 [2] H. Kashima, K. Tsuda, and A. Inokuchi, “Marginalized kernels between labeled graphs,” in Proceedings of ICML, 2003, pp. 321–328.
 [3] K. M. Borgwardt and H.-P. Kriegel, “Shortest-path kernels on graphs,” in Proceedings of the IEEE International Conference on Data Mining, 2005, pp. 74–81.
 [4] F. Costa and K. D. Grave, “Fast neighborhood subgraph pairwise distance kernel,” in Proceedings of ICML, 2010, pp. 255–262.
 [5] A. Gaidon, Z. Harchaoui, and C. Schmid, “A time series kernel for action recognition,” in Proceedings of BMVC, 2011, pp. 1–11.
 [6] Z. Harchaoui and F. Bach, “Image classification with segmentation graph kernels,” in Proceedings of CVPR, 2007.
 [7] K. Gkirtzou and M. B. Blaschko, “The pyramid quantized weisfeiler-lehman graph representation,” Neurocomputing, vol. 173, pp. 1495–1507, 2016.
 [8] N. Kriege and P. Mutzel, “Subgraph matching kernels for attributed graphs,” in Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26  July 1, 2012, 2012.
 [9] L. Bai, L. Rossi, L. Cui, J. Cheng, Y. Wang, and E. R. Hancock, “A quantum-inspired similarity measure for the analysis of complete weighted graphs,” IEEE Transactions on Cybernetics, vol. 50, no. 3, pp. 1264–1277, 2020.
 [10] L. Bai, Z. Zhang, C. Wang, X. Bai, and E. R. Hancock, “A graph kernel based on the jensen-shannon representation alignment,” in Proceedings of IJCAI, 2015, pp. 3322–3328.
 [11] L. Bai, L. Rossi, Z. Zhang, and E. R. Hancock, “An aligned subtree kernel for weighted graphs,” in Proceedings of ICML, 2015, pp. 30–39.
 [12] L. Bai and E. R. Hancock, “Depth-based complexity traces of graphs,” Pattern Recognition, vol. 47, no. 3, pp. 1172–1186, 2014.
 [13] H. Fröhlich, J. K. Wegner, F. Sieker, and A. Zell, “Optimal assignment kernels for attributed molecular graphs,” in Proceedings of ICML, 2005, pp. 225–232.
 [14] I. H. Witten, E. Frank, and M. A. Hall, Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2011.
 [15] R. C. Wilson, E. R. Hancock, and B. Luo, “Pattern vectors from algebraic graph theory,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 7, pp. 1112–1124, 2005.
 [16] X. Bai, E. R. Hancock, and R. C. Wilson, “Graph characteristics from the heat kernel trace,” Pattern Recognition, vol. 42, no. 11, pp. 2589–2606, 2009.
 [17] L. Bai, Z. Zhang, P. Ren, L. Rossi, and E. R. Hancock, “An edge-based matching kernel through discrete-time quantum walks,” in Proceedings of ICIAP, 2015, pp. 27–38.
 [18] M. Neuhaus and H. Bunke, “Edit distance-based kernel functions for structural pattern classification,” Pattern Recognition, vol. 39, no. 10, pp. 1852–1863, 2006.
 [19] L. Bai, P. Ren, X. Bai, and E. R. Hancock, “A graph kernel from the depth-based representation,” in Proceedings of S+SSPR, 2014, pp. 1–11.
 [20] F. R. Bach, “Graph kernels between point clouds,” in Proceedings of ICML, 2008, pp. 25–32.
 [21] T. Gärtner, P. Flach, and S. Wrobel, “On graph kernels: hardness results and efficient alternatives,” in Proceedings of COLT, 2003, pp. 129–143.
 [22] F. Aziz, R. C. Wilson, and E. R. Hancock, “Backtrackless walks on a graph,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 6, pp. 977–989, 2013.
 [23] N. Shervashidze, P. Schweitzer, E. J. van Leeuwen, K. Mehlhorn, and K. M. Borgwardt, “Weisfeiler-lehman graph kernels,” Journal of Machine Learning Research, vol. 1, pp. 1–48, 2010.
 [24] D. Cai, “The fastest MATLAB implementation of k-means,” software available at http://www.zjucadcg.cn/dengcai/Data/Clustering.html, 2012.
 [25] S. Biasotti, S. Marini, M. Mortara, G. Patanè, M. Spagnuolo, and B. Falcidieno, “3d shape matching through topological structures,” in Proceedings of DGCI, 2003, pp. 194–203.
 [26] N. Shervashidze, P. Schweitzer, E. J. van Leeuwen, K. Mehlhorn, and K. M. Borgwardt, “Weisfeiler-lehman graph kernels,” Journal of Machine Learning Research, vol. 12, pp. 2539–2561, 2011.
 [27] N. Shervashidze, S. Vishwanathan, T. Petri, K. Mehlhorn, and K. Borgwardt, “Efficient graphlet kernels for large graph comparison,” Journal of Machine Learning Research, vol. 5, pp. 488–495, 2009.
 [28] L. Bai, L. Rossi, H. Bunke, and E. R. Hancock, “Attributed graph kernels using the jensen-tsallis q-differences,” in Proceedings of ECML-PKDD, 2014, pp. I:99–114.
 [29] M. Zhang, Z. Cui, M. Neumann, and Y. Chen, “An endtoend deep learning architecture for graph classification,” in Proceedings of AAAI, 2018.
 [30] M. Niepert, M. Ahmed, and K. Kutzkov, “Learning convolutional neural networks for graphs,” in Proceedings of ICML, 2016, pp. 2014–2023.
 [31] J. Atwood and D. Towsley, “Diffusion-convolutional neural networks,” in Proceedings of NIPS, 2016, pp. 1993–2001.
 [32] P. Yanardag and S. V. N. Vishwanathan, “Deep graph kernels,” in Proceedings of KDD, 2015, pp. 1365–1374.
 [33] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2011.

 [34] A. Gammerman, V. Vovk, and V. Vapnik, “Learning by transduction,” in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 1998, pp. 148–155.