The performance of many machine learning algorithms depends on the way in which the distance or similarity between data points is measured [Kulis13]. For instance, $k$-nearest neighbor classification [Cover67] decides the class label of a data point from those of its neighbors, while Learning Vector Quantization (LVQ) [Kohonen1995] classifies each data point based on the closest prototype according to a given distance measure. Clustering algorithms, such as K-Means [Lloyd82], rely on a given distance or similarity function for input data. In order for these algorithms to perform accurately and efficiently, a metric that suits the given problem is necessary. The metric should capture the characteristics of the expected datasets, i.e., a pair of data points from the same class should be closer than a pair of points from different classes.
The objective of metric learning is to learn a good metric from data, since handcrafting good metrics for specific problems is generally difficult [DBLP:journals/corr/BelletHS13]. Metric learning has been an active research topic because of its broad applicability: any algorithm that uses a distance measure internally can benefit from its results [DBLP:journals/corr/BelletHS13, Kulis13]. In metric learning, a training set consists of two sets of pairs: positive pairs and negative pairs. Metrics are learned by optimizing a loss function that draws positive pairs closer while separating negative pairs. This improves the accuracy of various machine learning algorithms that depend on the metric. Figure 1 illustrates metric learning applied to tree-structured data, which is the focus of this paper.
Discrete structures, in particular tree structures, play a key role in several research domains, for instance, XML documents on the web, parse trees for computer programs and natural language, and glycan structures in bioinformatics. Distance measures that exploit such structures have been extensively studied. The tree edit distance [Tai:1979:TCP:322139.322143] is one of the common choices for processing tree-structured data. Intuitively, the tree edit distance is the number of operations needed to transform one input tree into the other. The edit operations consist of deletion, insertion, and replacement of the nodes in the trees. The tree edit distance is used in many research domains such as information extraction and bioinformatics [Jiang02, Reis04]. Computing the tree edit distance, however, does not scale, making it problematic for large-scale datasets. The current best algorithm runs in $O(n^3)$ time, where $n$ is the number of nodes of the input trees [Demaine:2009:ODA:1644015.1644017]. To overcome this issue, Augsten et al. [Augsten:2008:PGD:1670243.1670247] proposed the $pq$-gram distance, which can be computed faster than the tree edit distance: computing the $pq$-gram distance of trees with $n$ nodes takes $O(n \log n)$ time for fixed $p$ and $q$. Moreover, the $pq$-gram distance is known to approximate the fanout weighted tree edit distance, a weighted variant of the tree edit distance [Augsten:2008:PGD:1670243.1670247].
Most existing studies of metric learning for tree-structured data use the tree edit distance in learning, i.e., they learn the costs of the edit operations from examples [Bellet2012, Mokbel2015306, pmlr-v80-paassen18a]. The edit distance is, however, expensive to compute and hence not suitable for large-scale datasets.
In this paper, we propose a novel metric learning approach for tree-structured data that has the following features. First, we propose a differentiable parameterized distance based on $pq$-grams, the weighted $pq$-gram distance, to achieve practical metric learning even for large-scale tree-structured datasets. To make the distance function differentiable and always positive, we use the softplus function. This enables us to learn the distance function by gradient descent techniques and to retain the triangle inequality during the learning process. Second, we propose a way to learn the weighted $pq$-gram distance through Large Margin Nearest Neighbors (LMNN) [weinberger2009distance], one of the most widely-used metric learning schemes. Our proposed approach not only achieves results competitive with those of state-of-the-art edit distance-based methods [Bellet2012, pmlr-v80-paassen18a] in various classification problems, but also solves classification problems much faster than edit distance-based methods. Third, our method is interpretable. Our approach shares some aspects with kernel methods [DBLP:Gartner03]; however, unlike kernel methods, we do not implicitly map data points into a high-dimensional space. Moreover, our weight parameter indicates which tree substructures are important for classifying input trees.
The remainder of this paper is structured as follows. We discuss related work on metric learning for structured data in Section 2. In Section 3, we revisit the basic concepts of tree-structured data, the $pq$-gram distance, and the scheme of distance metric learning as background. Section 4 describes our metric learning system in detail. Section 5 describes the experiments conducted on our methods, including accuracy and time comparisons. We conclude in Section 6.
2 Related Work
Our work is related to several machine learning research areas, especially metric learning and machine learning for structured data.
A pioneering study of metric learning formulated learning the Mahalanobis distance as an optimization problem [NIPS2002]. Large Margin Nearest Neighbors (LMNN) [weinberger2009distance] was proposed to learn the Mahalanobis distance from nearest neighbors. LMNN is often used because of its simplicity and efficiency [kedem12, Kulis13, shibin10, mmlmnn]. LMNN is also a well-studied metric learning scheme; for instance, the relation between LMNN and the Support Vector Machine has been pointed out in a unified view [huyen12]. We apply the LMNN scheme to learn the distance between labeled ordered trees.
Almost all past studies on learning distances between trees employ the edit distance, i.e., they learn the costs of edit operations from examples. Early work on learning the edit distance took a stochastic approach [mcc05, sebban06, ristad98]. Good Edit Similarity Learning (GESL) [Bellet2012] is a well-organized framework to learn the edit distance. GESL essentially optimizes $(\epsilon, \gamma, \tau)$-goodness [BalcanCOLT08, BalcanML08], which guarantees its generalization performance. Mokbel et al. [Mokbel2015306] proposed a novel approach that learns the edit costs for sequences simultaneously with the Generalized Learning Vector Quantization (GLVQ) [Sato:1995:GLV:2998828.2998888] model. Paaßen et al. [pmlr-v80-paassen18a] proposed to learn embeddings of the tree node labels while learning the GLVQ model. This approach is called Embedding Edit Distance Learning (BEDL) and succeeds in learning the edit distance flexibly from examples. These works blazed a trail in the field of metric learning for structured data. However, all of these methods incur high computation cost since they essentially compute the edit distance between trees. Our method uses the $pq$-gram distance rather than the edit distance in learning the parameters. This is a key difference between past studies and our approach.
In the field of machine learning for structured data, the kernel method is an active research topic [DBLP:Gartner03]. For instance, Kuboyama et al. [kuboyama2007] proposed a kernel function that is computed from $q$-grams of trees. Tree kernels have been applied in many domains, such as Natural Language Processing and Bioinformatics [mos2006, yamanishi07]. Kernel methods, however, lack interpretability since they implicitly map data points into a high-dimensional space. Moreover, the kernel matrix must be positive semidefinite, but this constraint does not suit some problems [Schleif2015].
In this section, we define the basic concepts of tree-structured data and the $pq$-gram distance following [Augsten:2008:PGD:1670243.1670247]. We also review the general concept of metric learning following [DBLP:journals/corr/BelletHS13, Kulis13] and a metric learning algorithm following [DBLP:journals/corr/BelletHS13, weinberger2009distance].
A tree $T$ is a directed, acyclic, connected, non-empty graph with node set $N(T)$ and edge set $E(T)$. An edge $(u, v)$ is an ordered pair of nodes in which $u$ is the parent of $v$. A node can have at most one parent, and nodes with the same parent are siblings. A total order $<$ is defined on each group of sibling nodes. Two siblings $u$ and $v$ with $u < v$ are contiguous iff they have no sibling $w$ such that $u < w < v$. A node $v$ is the $i$-th child of $u$ iff $v$ has exactly $i - 1$ smaller siblings. The node with no parent is the root node, denoted by $\mathrm{root}(T)$, and a node without children is a leaf. Each node $v$ has a label, $\lambda(v) \in \Sigma$, where $\Sigma$ is a finite alphabet. In what follows, such trees are called ordered labeled trees. We write, in recursive style, tree $T$ as $l(T_1, \dots, T_k)$, where $l$ is the label of the root node and $T_1, \dots, T_k$ is the list of subtrees whose roots are the children of the root node.
Intuitively, the $pq$-grams of a tree are all subtrees of a specific shape, where the parameters $p$ and $q$ define the shape. To ensure that each node of the tree appears in at least one $pq$-gram, we extend the tree with dummy nodes.
Definition 1 ($pq$-Extended Tree).
Let $T$ be a tree, and $p$ and $q$ be two positive integers. The $pq$-extended tree, $T^{pq}$, is constructed from $T$ by (i) adding $p - 1$ ancestors to the root node, (ii) inserting $q - 1$ children before the first and after the last child of each non-leaf node, and (iii) adding $q$ children to each leaf of $T$. All newly inserted nodes are dummy nodes that do not occur in $T$ and have a special label $*$.
An example of a $pq$-extended tree is given in Figure 2-(b).
Definition 2 ($pq$-Gram).
Let $T$ be a tree, $T^{pq}$ the corresponding extended tree, $p \geq 1$, $q \geq 1$. A subtree $G$ of $T^{pq}$ is a $pq$-gram of $T$ iff (i) $G$ has $q$ leaf nodes and $p$ non-leaf nodes, (ii) all leaf nodes of $G$ are children of a single node, and (iii) the leaf nodes of $G$ are consecutive siblings in $T^{pq}$.
3.3 $pq$-gram distance
We can define a distance measure between trees based on $pq$-grams, which we call the $pq$-gram distance. Intuitively, the $pq$-gram distance is the number of $pq$-grams that are not shared by two trees. The $pq$-gram distance is computed as follows: (1) extract all $pq$-grams of the input trees, and (2) count the number of non-shared $pq$-grams. To save space, we use a tuple representation of $pq$-grams.
Definition 3 (Label Tuple).
Let $G$ be a $pq$-gram with nodes $v_1, \dots, v_{p+q}$, where $v_i$ is the $i$-th node in the preorder traversal of $G$. The tuple $\lambda(G) = (\lambda(v_1), \dots, \lambda(v_{p+q}))$ is called the label tuple of $G$, where $\lambda(v_i)$ is the label of node $v_i$.
Definition 4 ($pq$-Gram Index).
Let $\mathcal{G}(T)$ be the multiset of all $pq$-grams of a tree $T$, $p \geq 1$, $q \geq 1$. The $pq$-gram index, $I(T)$, of $T$ is defined as the multiset of the label tuples of all $pq$-grams of $T$, i.e., $I(T) = \{\lambda(G) \mid G \in \mathcal{G}(T)\}$.
Figure 2-(e) shows an example of the label tuples and the $pq$-gram index. The size of the $pq$-gram index is linear in the number of tree nodes [Augsten:2008:PGD:1670243.1670247]. Hereafter, if the distinction is clear from the context, we use the term $pq$-gram for both the $pq$-gram itself and its representation as a label tuple.
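Definitions 1–4 can be condensed into a short routine that extracts the $pq$-gram index of a tree. The following is an illustrative sketch, not the authors' implementation: the nested-tuple tree representation and the name `pq_gram_index` are our own, and the $pq$-extension is simulated on the fly by padding the ancestor stem and sibling windows with dummy labels instead of materializing the extended tree.

```python
from collections import Counter

STAR = "*"  # label of dummy nodes in the pq-extended tree

def pq_gram_index(tree, p, q):
    """Return the pq-gram index (a multiset of label tuples) of a tree.

    A tree is represented as a nested pair (label, [children...]).
    The pq-extension (Definition 1) is simulated on the fly: the stem is
    padded with p-1 dummy ancestors, and each node's children are padded
    with q-1 dummies on both sides (leaves get q dummy children).
    """
    index = Counter()

    def visit(node, stem):
        label, children = node
        stem = (stem + (label,))[-p:]  # keep only the last p ancestors
        if children:
            base = [STAR] * (q - 1) + [c[0] for c in children] + [STAR] * (q - 1)
        else:
            base = [STAR] * q  # leaves get q dummy children
        for i in range(len(base) - q + 1):  # q consecutive siblings
            index[stem + tuple(base[i:i + q])] += 1
        for child in children:
            visit(child, stem)

    visit(tree, (STAR,) * (p - 1))
    return index
```

For the tree $a(a(e,b),b,c)$ with $p = 2$, $q = 3$, this yields 13 label tuples, one per node-anchored window, matching the extended-tree construction of Definition 1.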
Definition 5 ($pq$-Gram Distance).
Let $T_1$ and $T_2$ be trees, $p \geq 1$, $q \geq 1$. The $pq$-gram distance, $\Delta(T_1, T_2)$, between the trees is defined as the size of the symmetric difference between their indexes:
$\Delta(T_1, T_2) = |I(T_1) \uplus I(T_2)| - 2\,|I(T_1) \cap I(T_2)|,$
where $\uplus$ is multiset union and $\cap$ is multiset intersection.
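Definition 5 amounts to a few lines over multiset counts. The sketch below is our own illustration (the function name is hypothetical), using `collections.Counter` as the multiset of label tuples:

```python
from collections import Counter

def pq_gram_distance(index1, index2):
    """Size of the symmetric difference between two pq-gram indexes.

    Each index is a Counter (multiset) of pq-gram label tuples; the
    distance is |I1 (+) I2| - 2 * |I1 (^) I2| with multiset union and
    intersection (Definition 5).
    """
    union = sum(index1.values()) + sum(index2.values())
    intersection = sum(min(c, index2[g]) for g, c in index1.items())
    return union - 2 * intersection
```

Identical indexes give distance 0; each label tuple contributes the absolute difference of its occurrence counts.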
Figure 2 illustrates the computation of the $pq$-gram distance between two input trees: the $pq$-gram indexes of both trees are extracted, the sizes of their multiset union and intersection are computed, and the distance follows from Definition 5.
Augsten et al. [Augsten:2008:PGD:1670243.1670247] showed that the $pq$-gram distance is a pseudo-metric, that is, the distance can be zero for distinct trees, in contrast to a true metric. The computation time of the $pq$-gram distance is $O(n \log n)$ for fixed $p$ and $q$, where $n$ is the number of nodes of the input trees. Moreover, the $pq$-gram distance approximates the tree edit distance in which each edit operation is weighted by the fanout of the involved nodes, called the fanout weighted tree edit distance [Augsten:2008:PGD:1670243.1670247].
3.4 Metric learning with nearest neighbors
The purpose of metric learning is to adapt a parameterized distance measure to given positive and negative examples. The metric learning problem is typically formulated as an optimization problem of the following general form [DBLP:journals/corr/BelletHS13]:
$\min_{\theta} \; \mathcal{L}(d_\theta, \mathcal{P}, \mathcal{N}),$
where $\mathcal{L}$ is a loss function that incurs a penalty when training constraints are violated, $d_\theta$ is a distance function parameterized by $\theta$, $\mathcal{P}$ is a set of positive pairs, and $\mathcal{N}$ is a set of negative pairs.
Large Margin Nearest Neighbors (LMNN) [weinberger2009distance] is one of the most widely-used distance learning schemes. LMNN defines the training pairs locally: for each data point, same-class neighbors, the target neighbors, are paired as positive, while different-class neighbors, the impostors, are paired as negative. A schematic illustration of LMNN is given in Figure 3.
In this section, we introduce a novel approach to metric learning between trees based on $pq$-grams. The $pq$-gram distance is computed by simply counting the number of $pq$-grams not shared between input trees. Some $pq$-grams, however, can be more important than others as discriminators for a given classification problem. We therefore introduce a weight function for $pq$-grams and learn it from examples.
Our approach shares some aspects with edit distance-based approaches [Bellet2012, Mokbel2015306, pmlr-v80-paassen18a]. The edit distance is defined by the minimum number of edit operations needed to transform one tree into another. Edit distance-based methods primarily learn the costs of edit operations, i.e., they identify which edit operations are essential for a given classification problem. Since the edit operations are defined between two nodes, these methods mostly learn the importance of relations between nodes. Our approach, in contrast, learns the importance of subtrees of the tree structure.
4.0.1 $pq$-gram distance with vector representations
Augsten et al. [Augsten:2008:PGD:1670243.1670247] represent the $pq$-gram index as a multiset and compute the distance by operations between multisets. However, if the set of tree node labels is finite, the $pq$-gram index can be represented as a vector of fixed dimension. The vector representation allows us to compute the $pq$-gram distance efficiently.
Definition 6 ($pq$-Gram Vector). Let $S$ be the set of all $pq$-grams occurring in the dataset. For a tree $T$ with $pq$-gram index $I(T)$, the $pq$-gram vector $\mathbf{v}(T)$ is the $|S|$-dimensional count vector whose elements are the numbers of occurrences in $I(T)$ of the corresponding $pq$-grams; each dimension of $\mathbf{v}(T)$ corresponds to one $pq$-gram in $S$.
In order to compute the $pq$-gram distance from count vectors, we introduce a function that computes the number of non-shared $pq$-grams.
Definition 7 ($pq$-Gram Symmetric Difference Vector). Let $T_1$ and $T_2$ be input trees. We define the $pq$-gram symmetric difference vector between $T_1$ and $T_2$ as:
$\mathbf{d}(T_1, T_2) = \mathbf{v}(T_1) + \mathbf{v}(T_2) - 2 \min(\mathbf{v}(T_1), \mathbf{v}(T_2)),$
where $\min$ is the element-wise minimum function, i.e., for $n$-dimensional vectors $\mathbf{x}$ and $\mathbf{y}$, $\min(\mathbf{x}, \mathbf{y})_i = \min(x_i, y_i)$.
Let $T_1$ and $T_2$ be trees. The $pq$-gram distance equals the sum of the elements of the $pq$-gram symmetric difference vector, i.e.,
$\Delta(T_1, T_2) = \mathbf{1}^\top \mathbf{d}(T_1, T_2),$
where $\mathbf{1}$ is the all-one vector and $\mathbf{d}(T_1, T_2)$ is the $pq$-gram symmetric difference vector.
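The vector computation above can be sketched in a few lines (an illustration with hypothetical names and toy count vectors, not the authors' code). Note that element-wise, $a + b - 2\min(a, b) = |a - b|$:

```python
def sym_diff_vector(v1, v2):
    """pq-gram symmetric difference vector: v1 + v2 - 2*min(v1, v2).

    Element-wise, a + b - 2*min(a, b) equals |a - b|.
    """
    return [a + b - 2 * min(a, b) for a, b in zip(v1, v2)]

# Toy count vectors over a fixed enumeration of all pq-grams in a dataset.
v1 = [2, 1, 0, 3]
v2 = [1, 1, 2, 0]
d = sym_diff_vector(v1, v2)
distance = sum(d)  # 1^T d: the pq-gram distance of the two trees
```

Summing the symmetric difference vector recovers exactly the multiset-based distance of Definition 5, but on fixed-dimension vectors that can be precomputed once per tree.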
4.1 Computing the weighted $pq$-gram distance
In this section, we introduce the weighted $pq$-gram distance to perform metric learning based on $pq$-grams. The weight reflects the “importance” of each $pq$-gram and allows the $pq$-gram distance to support fine-grained classification. In order to make the distance function differentiable and always positive, we use the softplus function. It enables us to learn the distance function by gradient descent techniques and to retain the triangle inequality during the learning process.
4.1.1 Weighted $pq$-gram distance
Definition 8 (Softplus Function).
The softplus function is defined as:
$\mathrm{softplus}(x) = \ln(1 + e^{x}).$
The softplus function always returns positive values. In order to prevent the weight parameters from being negative, we apply the softplus function to the parameters. The softplus function is differentiable with respect to its input, which enables us to learn the distance parameters by gradient descent techniques.
Definition 9 (Weighted $pq$-Gram Distance).
Let $T_1$ and $T_2$ be input trees. The weighted $pq$-gram distance, $\Delta_{\mathbf{w}}(T_1, T_2)$, is defined as follows:
$\Delta_{\mathbf{w}}(T_1, T_2) = \mathrm{softplus}(\mathbf{w})^\top \mathbf{d}(T_1, T_2),$
where $\mathbf{w}$ is the parameter vector that we learn, $\mathrm{softplus}$ is applied element-wise, and $\mathbf{d}(T_1, T_2)$ is the $pq$-gram symmetric difference vector between $T_1$ and $T_2$.
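Definition 9 can be sketched as follows (our own minimal illustration in plain Python; the paper's implementation uses PyTorch). Because the softplus output is strictly positive, every $pq$-gram contributes a positive coefficient regardless of the learned weight:

```python
import math

def softplus(x):
    """softplus(x) = ln(1 + e^x); strictly positive for every real x."""
    return math.log1p(math.exp(x))

def weighted_pq_gram_distance(w, d):
    """softplus(w)^T d: each pq-gram count difference d[i] is scaled by
    the strictly positive coefficient softplus(w[i])."""
    return sum(softplus(wi) * di for wi, di in zip(w, d))
```

With all-zero weights, every coefficient equals $\ln 2$, so the weighted distance is simply $\ln 2$ times the unweighted $pq$-gram distance; learning moves the weights away from this uniform starting point.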
The weighted $pq$-gram distance is a pseudo-metric, i.e., it satisfies the following conditions:
(i) non-negativity: $\Delta_{\mathbf{w}}(T_1, T_2) \geq 0$; (ii) reflexivity: $\Delta_{\mathbf{w}}(T, T) = 0$; (iii) symmetry: $\Delta_{\mathbf{w}}(T_1, T_2) = \Delta_{\mathbf{w}}(T_2, T_1)$; (iv) triangle inequality: $\Delta_{\mathbf{w}}(T_1, T_3) \leq \Delta_{\mathbf{w}}(T_1, T_2) + \Delta_{\mathbf{w}}(T_2, T_3)$.
Proof. Conditions (i), (ii), and (iii) are clear by definition.
(iv) Let $\mathcal{G}_1$, $\mathcal{G}_2$, and $\mathcal{G}_3$ be the multisets of extracted $pq$-grams of trees $T_1$, $T_2$, and $T_3$, respectively. Let $s$ be a $pq$-gram with $s \in \mathcal{G}_1 \cup \mathcal{G}_2 \cup \mathcal{G}_3$, and let $c_1$, $c_2$, and $c_3$ be the numbers of occurrences of $s$ in $\mathcal{G}_1$, $\mathcal{G}_2$, and $\mathcal{G}_3$, respectively. Let $\delta_{12}(s)$, $\delta_{23}(s)$, and $\delta_{13}(s)$ be the contributions of $s$ to the distances $\Delta_{\mathbf{w}}(T_1, T_2)$, $\Delta_{\mathbf{w}}(T_2, T_3)$, and $\Delta_{\mathbf{w}}(T_1, T_3)$, respectively. Each distance is the sum of the contributions, e.g., $\Delta_{\mathbf{w}}(T_1, T_3) = \sum_{s} \delta_{13}(s)$. Here, $\delta_{12}(s) = w^{+}_{s}\,|c_1 - c_2|$, $\delta_{23}(s) = w^{+}_{s}\,|c_2 - c_3|$, and $\delta_{13}(s) = w^{+}_{s}\,|c_1 - c_3|$, where $w^{+}_{s}$ is the weight parameter for $s$ after the softplus. Note that $w^{+}_{s}$ is positive since it is an output of the softplus function. Since $|c_1 - c_3| \leq |c_1 - c_2| + |c_2 - c_3|$, we have $\delta_{13}(s) \leq \delta_{12}(s) + \delta_{23}(s)$ for every $s$, and therefore $\Delta_{\mathbf{w}}(T_1, T_3) \leq \Delta_{\mathbf{w}}(T_1, T_2) + \Delta_{\mathbf{w}}(T_2, T_3)$. ∎
4.2 Learning the weighted $pq$-gram distance
The steps to perform metric learning are (1) create training pairs from a given dataset, (2) set an appropriate loss function for the training pairs, and (3) optimize the loss function with respect to the parameter of the target distance function.
We generate training pairs following the LMNN scheme. For every data point $T_i$ with class label $y_i$ in a given dataset $D$, we create positive and negative pairs as follows:
$\mathcal{P}_i = \{(T_i, T_j) \mid T_j \in \mathrm{target}_k(T_i)\}, \quad (10)$
$\mathcal{N}_i = \{(T_i, T_l) \mid T_l \in \mathrm{impostor}_k(T_i)\}, \quad (11)$
where $\mathrm{target}_k$ is the “target” function that returns the set of $k$-nearest neighbors with the same label $y_i$, and $\mathrm{impostor}_k$ is the “impostor” function that returns the set of data points that have different labels and are closer than the farthest ($k$-th) target. We now define the set of positive pairs as $\mathcal{P} = \bigcup_{i=1}^{m} \mathcal{P}_i$, and the set of negative pairs as $\mathcal{N} = \bigcup_{i=1}^{m} \mathcal{N}_i$, where $m$ is the number of training data points. We note that the training pairs are created with the default distance metric before the learning step.
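The target/impostor pair generation can be sketched as follows. This is an illustrative implementation under our own naming, assuming a precomputed pairwise distance matrix (the default distance, per the remark above):

```python
def lmnn_pairs(dist, labels, k):
    """LMNN training pairs from a precomputed n x n distance matrix.

    Positive pairs: each point with its k nearest same-label neighbors
    (targets). Negative pairs: each point with every differently labeled
    point lying closer than its k-th (farthest) target (impostors).
    """
    positives, negatives = [], []
    n = len(labels)
    for i in range(n):
        same = sorted((dist[i][j], j) for j in range(n)
                      if j != i and labels[j] == labels[i])
        targets = same[:k]
        if not targets:
            continue  # no same-label neighbor available
        radius = targets[-1][0]  # distance to the k-th target
        positives += [(i, j) for _, j in targets]
        negatives += [(i, j) for j in range(n)
                      if labels[j] != labels[i] and dist[i][j] < radius]
    return positives, negatives
```

The quadratic scan over all points is fine for the dataset sizes in this paper; a nearest-neighbor index would be the natural replacement at larger scale.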
Now we introduce the loss function based on the hinge loss:
$L(\mathbf{w}) = \lambda \|\mathbf{w}\|_2^2 + \sum_{(T_i, T_j) \in \mathcal{P}} \left[\Delta_{\mathbf{w}}(T_i, T_j) - \gamma_{\mathrm{pos}}\right]_{+} + \sum_{(T_i, T_l) \in \mathcal{N}} \left[\gamma_{\mathrm{neg}} - \Delta_{\mathbf{w}}(T_i, T_l)\right]_{+},$
where $\lambda$ is a regularization coefficient, $[x]_{+} = \max(0, x)$, and $\gamma_{\mathrm{pos}}$ and $\gamma_{\mathrm{neg}}$ are constants that represent margins. The first term is the L2 regularization term, the second term is a loss that draws positive pairs closer, and the third term is a loss that pushes negative pairs farther apart. The hinge loss-based formulation is widely used in margin-based methods such as the soft-margin SVM [Chen04], LMNN [weinberger2009distance], and GESL [Bellet2012]. If a positive (resp. negative) pair satisfies a certain criterion, i.e., it is close (resp. far) enough, it does not contribute to the loss function.
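The loss can be sketched directly from its three terms (our own illustrative names; margins and regularization coefficient are passed in, and `dist(i, j)` stands for the current weighted $pq$-gram distance):

```python
def hinge_loss(w, pos_pairs, neg_pairs, dist, lam, gamma_pos, gamma_neg):
    """L2-regularized hinge loss over LMNN training pairs.

    `dist(i, j)` returns the current weighted pq-gram distance between
    trees i and j; gamma_pos / gamma_neg are the two margins.
    """
    reg = lam * sum(wi * wi for wi in w)           # L2 regularization
    pull = sum(max(0.0, dist(i, j) - gamma_pos)    # positive pairs still
               for i, j in pos_pairs)              # outside their margin
    push = sum(max(0.0, gamma_neg - dist(i, j))    # negative pairs still
               for i, j in neg_pairs)              # inside their margin
    return reg + pull + push
```

A positive pair already closer than $\gamma_{\mathrm{pos}}$, or a negative pair already farther than $\gamma_{\mathrm{neg}}$, contributes exactly zero, which is the inactive-hinge behavior described above.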
We minimize the loss function by gradient descent. In order to perform gradient descent, we need the gradient of $\Delta_{\mathbf{w}}(T_1, T_2)$ for input trees $T_1$ and $T_2$. The gradient of the weighted $pq$-gram distance function with respect to $\mathbf{w}$ is computed as follows:
$\frac{\partial \Delta_{\mathbf{w}}(T_1, T_2)}{\partial w_i} = \sigma(w_i)\, d_i,$
where $w_i$ is the $i$-th element of $\mathbf{w}$, $d_i$ is the $i$-th element of the $pq$-gram symmetric difference vector $\mathbf{d}(T_1, T_2)$, and $\sigma$ is the logistic sigmoid, the derivative of the softplus function.
The learning procedure is summarized as follows. First, the $pq$-gram vector representations are computed for all input trees; these representations are used internally by the distance function in the following steps. Second, the set of positive pairs and the set of negative pairs are created by Eq. (10) and Eq. (11), respectively, from the given training dataset. Finally, we minimize the loss function with respect to $\mathbf{w}$ by gradient descent. In practice, we update the impostors during the learning process to improve model performance.
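The procedure above can be condensed into a single update step. This is a sketch under our own names, using vanilla gradient descent rather than the Adam optimizer of the actual implementation; each active hinge term contributes the sigmoid-times-difference gradient of the weighted distance:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softplus(x):
    return math.log1p(math.exp(x))

def train_step(w, pos_diffs, neg_diffs, lr, lam, gamma_pos, gamma_neg):
    """One gradient-descent step on the hinge loss.

    `pos_diffs` / `neg_diffs` hold the pq-gram symmetric difference
    vectors of the positive / negative training pairs, precomputed once
    as in step (1) of the procedure above.
    """
    grad = [2.0 * lam * wi for wi in w]  # gradient of the L2 term
    for d in pos_diffs:
        dist = sum(softplus(wi) * di for wi, di in zip(w, d))
        if dist > gamma_pos:  # active hinge: pull the pair together
            for i, di in enumerate(d):
                grad[i] += sigmoid(w[i]) * di
    for d in neg_diffs:
        dist = sum(softplus(wi) * di for wi, di in zip(w, d))
        if dist < gamma_neg:  # active hinge: push the pair apart
            for i, di in enumerate(d):
                grad[i] -= sigmoid(w[i]) * di
    return [wi - lr * gi for wi, gi in zip(w, grad)]
```

An active positive pair decreases the weights of its differing $pq$-grams (shrinking the distance), while an active negative pair increases them; inactive pairs leave the weights untouched, so only violated margins drive the update.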
In this section, we discuss our experiments on artificial and real-world datasets. The experiments are designed to show that the proposed approach not only achieves competitive results with state-of-the-art edit distance-based methods in various classification problems, but also solves the classification problems much faster than edit distance-based methods. All experiments were performed on a desktop computer with Intel(R) Xeon(R) E5-2680 v2 @ 2.80GHz CPU, 126GB RAM, and CentOS Linux 7.4.
We used our implementation in Python 3.6 and PyTorch [pytorch] for LMNN learning with the weighted $pq$-gram distance. The tree edit distance algorithm is implemented following [paasen18-supp]. We adopted the Adam optimizer [Adam] to optimize the parameters in the training phase. We also used a Java implementation (https://pub.uni-bielefeld.de/data/2919994) for BEDL and GESL.
We evaluated our approach on one artificial dataset and several real-world datasets.
This dataset consists of two classes of strings of length exactly nine. The first class is drawn randomly from the set of strings generated by one regular expression, and the second class from the set generated by another. A particular substring never appears in the first class, and every string in the first class is generated in a “periodic way”, unlike those in the second class. These facts are important for classifying these strings. We can regard a string as a tree (without branching nodes) in a natural way. We used 100 strings for each class.
We used two datasets from the KEGG database [10.1093/glycob/cwj010], as used in [yamazaki15]. Glycans are the third major class of biomolecules following DNA and proteins. Each monosaccharide in a glycan structure is connected to one or more monosaccharides, so we can regard a glycan structure as a labeled tree. CarbBank/CCSD [10.1093/glycob/2.6.505] gives the class labels for glycans. The trees have node labels and edge labels; we put each edge label into the corresponding child node in the same way as [yamazaki15]. Every leaf node is represented by a special label. Each glycan structure is assigned to a blood component class among Erythrocyte, Leukemic, Serum, and Plasma. We created two binary classification problems: (i) Erythrocyte/Leukemic (Glycan_EL) and (ii) Serum/Plasma (Glycan_SP). These problems have 138 trees and 267 trees, respectively. For evaluating multi-class classification in the accuracy comparison, we also used the dataset that contains all instances (Glycan_MULTI); it contains 405 trees and 4 class labels.
The Words dataset used in [Bellet2012] contains English words and French words extracted from Wiktionary (http://en.wiktionary.org/wiki/Wiktionary:Frequency_lists). It consists of basic English/French words in order of frequency of use. We can regard a word as a tree (without branching nodes) in a natural way. Every leaf node is represented by a special label. We considered only words of length at least 4 to remove articles and prepositions, and used the top 500 words for each class.
The Sentiment Treebank dataset contains movie reviews with their parse trees. The internal nodes have one of five class labels, from highly negative to highly positive. We set the class label to whether the sentence is positive or negative as a whole, i.e., the root node's label. Every root node is replaced by a unique node with a special label. Every leaf node represents a specific word in the review sentence; we replaced each word with its POS tag for scalability. We randomly chose 100 trees for each class.
The datasets we used are summarized in Table 1.
| Dataset | #trees | #node labels | mean tree size | #classes |
5.2 Accuracy comparison
We evaluated several classification problems using different models with different distance measures. Each problem setting has three parts: (i) the distance measure used by the classification model, (ii) the metric learning algorithm, and (iii) the distance-based classification model. We compared the following five settings: (E1) the $pq$-gram distance and the $k$-nearest neighbor classifier, (E2) the weighted $pq$-gram distance with LMNN and the $k$-nearest neighbor classifier (proposed), (E3) the edit distance with LMNN and the $k$-nearest neighbor classifier, (E4) the edit distance with GESL and the MRGLVQ classifier [Bellet2012, NEBEL2015295], and (E5) the edit distance with BEDL and the MRGLVQ classifier [NEBEL2015295, pmlr-v80-paassen18a].
On each dataset, we performed 5-fold cross-validation and compared the mean test error across the folds. In settings (E1) and (E2), we fixed the $pq$-gram size parameters $p$ and $q$. The number of neighbors $k$ for the $k$-nearest neighbor classifier, which is also the number of “targets” for LMNN learning, was set per dataset: one value for the Strings and Glycan_MULTI datasets and another for the others. As these parameters affect the classification results, we analyze their impact in the next subsection. The other parameters, namely the margin parameters, the initial learning rate of the Adam optimizer [Adam], and the L2 regularization coefficient, were fixed. We trained the model for 600 epochs and updated the impostors for LMNN every 50 epochs. In the LMNN learning step, we randomly chose 200 training data points whenever the number of training data points exceeded 200. In setting (E3), we fixed the optimal edit operations computed at the first iteration of the learning algorithm, in the same way as [DBLP:journals/corr/BelletHS13], and performed metric learning using the LMNN scheme with gradient descent with respect to the edit costs. The parameters for settings (E4) and (E5) were selected by nested cross-validation following [pmlr-v80-paassen18a].
Figure 4 shows the results of our experiments. On each dataset, the $k$-nearest neighbor classifier with the weighted $pq$-gram distance (E2) achieves a lower error rate than with the $pq$-gram distance (E1). Moreover, for all datasets, $k$-nearest neighbor classification with the weighted $pq$-gram distance (E2) achieves better results than with the tree edit distance (E3). Our approach also achieves results competitive with the state-of-the-art edit distance-based methods GESL and BEDL with the MRGLVQ classifier (E4, E5).
5.2.1 Effects of parameters
Since the values of $p$ and $q$ determine the shape of $pq$-grams, they can affect the error rates of classification. We analyzed the effect of changing the values of $p$ and $q$ on our proposed method (E2). In particular, we investigated the error rates of the 3-nearest neighbor classifiers with the conventional $pq$-gram distance and with the weighted $pq$-gram distance over several values of $p$ and $q$. Figure 5 shows the transition in error rates with respect to $p$ and $q$. For the Strings dataset, the smallest $pq$-gram size is the worst among all settings, which follows our intuition: every string in the first class is composed of substrings of length three, and these substrings are captured only by sufficiently large $pq$-grams. For the Glycan_SP dataset, the error rates gradually increase as both $p$ and $q$ become large. In both datasets, for all $p$ and $q$ values except the worst case in the Strings dataset, the $k$-nearest neighbor classifier with the weighted $pq$-gram distance outperformed that with the conventional $pq$-gram distance.
We also analyzed the effect of the number of neighbors $k$ on the $k$-nearest neighbor classifier and the LMNN metric learning scheme. In the training step, we created training pairs with $k$ targets for each tree; in the classification step, we performed $k$-nearest neighbor classification. Figure 6 shows the transition in error rate with respect to $k$. Interestingly, in the Strings dataset, one particular value of $k$ achieves the highest accuracy among all. With regard to the number of neighbors for the $k$-nearest neighbor classifier, Hastie et al. [TESL] pointed out that the best value is situation dependent. We highlight the fact that our proposed distance outperformed the conventional $pq$-gram distance regardless of the value of $k$ in both datasets, except for one case in the Glycan_SP dataset.
In this subsection, we discuss the interpretability of our method. We consider $pq$-grams with substantial weights to be important discriminators for classification problems, since they essentially determine the classification results.
Table 2 exhibits some $pq$-grams receiving substantial weights and their numbers of occurrences in each class in the Strings, Glycan_EL, and Words datasets. We observe that high-weight $pq$-grams tend to appear many times in one of the classes, but rarely in the others. This implies that $pq$-grams with substantial weights are important features for classifying trees. In the Glycan_EL dataset, for example, one high-weight $pq$-gram appears only in class 1, meaning that the corresponding subtree, which contains a leaf node, is a key feature for class 1. In the Words dataset, the $pq$-gram corresponding to the substring “eur” appears in French words 14 times but never in English words, which means French words in the dataset often contain “eur” while English words do not.
5.3 Running time comparison
In order to show the practical performance of our method, we compared the running times of classification algorithms based on the weighted $pq$-gram distance and on the tree edit distance. In this experiment, we ran the standard $k$-nearest neighbor algorithm with the two distance functions. We measured the time for executing the whole process: (i) encoding the trees into count vectors, (ii) computing the distances between the test data and the training data to identify neighbors, and (iii) making an inference by majority vote. The first encoding step is executed only for the weighted $pq$-gram distance. We first note that, in theory, the $pq$-gram distance between trees can be computed much faster than the fastest known tree edit distance algorithm [Demaine:2009:ODA:1644015.1644017], which runs in $O(n^3)$ time, versus $O(n \log n)$ time for the $pq$-gram distance. Consequently, per training-test pair, $k$-nearest neighbor inference scales as $O(n \log n)$ with the weighted $pq$-gram distance and as $O(n^3)$ with the tree edit distance.
Table 3 shows the mean running time of 3-nearest neighbor inference over 5-fold cross-validation. On all datasets, the weighted $pq$-gram distance yields much shorter inference times than the tree edit distance. In particular, on the Sentiment dataset, whose mean tree size is the largest among all datasets, the $k$-nearest neighbor classifier using our proposed method is over 5000 times faster than that using the tree edit distance.
We note that edit distance-based methods such as GESL do not directly learn the edit costs of the tree edit distance. They first compute an optimal sequence of edit operations for the training data, which is then fixed (held unchanged), and learn appropriate edit costs along this sequence, which makes the learning process much easier and faster than directly learning the edit costs. However, when making inferences on test data with the learned distance, the costly computation of the edit distance remains a major drawback.
| Dataset | weighted $pq$-gram distance | tree edit distance |
| Strings | 1.29 ± 0.098 sec | 23.7 ± 0.09 sec |
| Glycan_SP | 0.665 ± 0.059 sec | 28.4 ± 2.21 sec |
| Glycan_EL | 0.249 ± 0.121 sec | 182.9 ± 8.86 sec |
| Words | 41.3 ± 0.31 sec | 283.9 ± 5.18 sec |
| Sentiment | 1.79 ± 0.105 sec | 9013 ± 770 sec |
This paper has proposed a novel metric learning approach for tree-structured data with the following features. First, the differentiable parameterized distance based on $pq$-grams proposed herein, called the weighted $pq$-gram distance, achieves fast metric learning for tree-structured data. The weighted $pq$-gram distance can be computed in $O(n \log n)$ time, while the tree edit distance requires $O(n^3)$ time, where $n$ is the number of nodes of the input trees. Moreover, computation of the proposed distance involves only basic vector operations with the softplus function, which enables the distance function to be learned by gradient descent techniques while retaining the triangle inequality during the learning process. Second, we proposed a way of learning the weighted $pq$-gram distance through LMNN, one of the most widely-used metric learning schemes, formulating the metric learning problem as an optimization problem with a hinge loss-based objective. Third, the results of our proposal are interpretable: the weight parameter indicates which substructures in trees are important for classifying input trees.
We have empirically shown that, for various classification problems, our proposed method reduces error rates compared to the conventional $pq$-gram distance using $k$-nearest neighbor classification. Moreover, our approach achieved results competitive with the state-of-the-art edit distance-based methods GESL and BEDL. We have also shown that our approach solves classification problems much faster than the edit distance-based methods: in our experiments, the $k$-nearest neighbor classifier using our proposed method solved various classification problems up to 5000 times faster than that using the tree edit distance.
This work was partly supported by JSPS KAKENHI Grant Number 17K19973.