Specifically, to capture the “is-a” relationship between concept pairs, a taxonomy is often formulated as a tree or directed acyclic graph (DAG). Example applications can be found in e-commerce, where Amazon leverages product taxonomies for personalized recommendation and product navigation, and in fine-grained named entity recognition, where concept taxonomies (e.g., MeSH (Lipscomb, 2000)) are used to extract and label useful information from massive corpora.
However, the construction of taxonomies usually requires a substantial amount of human curation, which is time-consuming and labor-intensive. It is thus extremely hard to handle the large number of emerging new concepts in downstream tasks, which is fairly common nowadays with the rising tide of big data. To tackle this issue, recent work (Shen et al., 2020; Manzoor et al., 2020; Yu et al., 2020; Zhang et al., 2021; Zeng et al., 2021; Song et al., 2021) turns to automatically expanding and completing the existing taxonomy.
Previous taxonomy construction methods (Gupta et al., 2017; Mao et al., 2018; Zhang et al., 2018b; Liu et al., 2012; Wang et al., 2013; Hearst, 1992) build taxonomies from scratch and rely heavily on annotated hypernym pairs, which are expensive to obtain and sometimes inaccessible in practice. Therefore, automatic taxonomy expansion based on existing taxonomies is in great demand and has gained increasing attention.
The recent studies on taxonomy expansion and completion have achieved noticeable progress, with contributions mainly along two directions: (1) extracting hierarchical information from the existing taxonomy and modeling its structural information in different ways, such as local egonets (Shen et al., 2020), ⟨parent, query, child⟩ triplets (Zhang et al., 2021), and mini-paths (Yu et al., 2020); (2) leveraging a supporting corpus to generate concept embeddings directly. These methods either only used implicit relational semantics (Manzoor et al., 2020) or only relied on the corpus to construct a limited seed-guided taxonomy (Huang et al., 2020). Very recently, Zeng et al. (2021) combined the representations from semantic sentences and local-subgraph encodings as concept features. However, they only utilized a light-weight multi-layer perceptron (MLP) for matching, which suffers from limited representation power. In this paper, we follow (Zhang et al., 2021) and focus on taxonomy completion, which aims to predict the most likely ⟨query, hypernym, hyponym⟩ triplet for a given query concept. For example, in Figure 1, when considering the query “Integrated Circuit”, we aim to find its true parent “Hardware” and child “GPU”.
To effectively leverage both semantic and structural information for better taxonomy completion performance, we propose TaxoEnrich, which learns better representations for each candidate position and achieves new state-of-the-art taxonomy completion performance. Specifically, TaxoEnrich consists of four carefully-designed components. First, we propose a taxonomy-contextualized embedding generation process based on pseudo sentences extracted from the existing taxonomy. The two types of pseudo sentences, i.e., ancestral and descendant pseudo sentences, capture taxonomic relations from the two directions respectively. Powerful pretrained language models are then utilized to produce the taxonomy-contextualized embeddings from the extracted sentences. Second, to encode the structural information of the existing taxonomy in both vertical and horizontal views, we develop two novel encoders: a sequential feature encoder based on the pseudo sentences, and a query-aware sibling encoder based on the importance of candidate siblings to the matching task. The former learns taxonomy-aware candidate position representations, while the latter further augments the position representations with adaptively aggregated candidate sibling information. Finally, we develop an effective query-position matching model by extending previous work (Zhang et al., 2021) to incorporate our novel candidate position representations. Specifically, it takes into consideration both fine-grained (query to candidate parent) and coarse-grained (query to candidate position) relatedness for better taxonomy completion performance.
We conducted extensive experiments on four real-world taxonomies from different domains to test the performance of the TaxoEnrich framework. Furthermore, we designed two variants of the framework, TaxoEnrich and TaxoEnrich-S, for ablation experiments exploring the utilization of different information on different datasets, along with studies examining the effectiveness of each sub-module of the framework. Our results show that TaxoEnrich captures the correct positions of query nodes more accurately than previous methods and achieves state-of-the-art performance on both taxonomy completion and expansion tasks.
To summarize, our major contributions include:
We propose an effective embedding generation approach which can be generally applied to learn contextualized embeddings for each concept node in a given taxonomy.
We introduce the sequential feature encoders to capture vertical structural information of candidate positions in the taxonomy.
We design an effective query-aware sibling encoder to incorporate horizontal structural information in the taxonomy.
Extensive experiments demonstrate that our framework improves performance on both taxonomy completion and expansion tasks by a large margin over previous works.
2. Problem Definition
In this section, we formally define the taxonomy completion task studied in the paper.
Taxonomy. Following (Shen et al., 2020), we formulate a taxonomy as a directed acyclic graph T = (N, E), where each node n ∈ N represents a concept and each directed edge ⟨n1, n2⟩ ∈ E represents a taxonomic relationship between two concepts.
Taxonomy Completion. The taxonomy completion task (Zhang et al., 2021) is defined as follows: given an existing taxonomy T^0 = (N^0, E^0) and a set of new concepts C, assuming that each concept in C is in the same semantic domain as T^0, we aim to automatically find the most likely ⟨hypernym, hyponym⟩ pairs for each new concept to complete the taxonomy. The output is an updated taxonomy T = (N^0 ∪ C, E), where E is the updated edge set after inserting the new concepts.
Candidate Positions. In the taxonomy completion task, we define a valid candidate position as a pair of concept nodes ⟨p, c⟩ in the existing taxonomy, where p is a parent of c. Note that one of p and c could be a pseudo placeholder node, in case a concept needs to be inserted as a root or leaf node.
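To make the definition concrete, here is a minimal sketch (a toy example, not the paper's code; the `PSEUDO` placeholder name is illustrative) that enumerates valid candidate positions, including the pseudo placeholder for root and leaf insertion:

```python
PSEUDO = "<pseudo>"  # placeholder node for root/leaf insertion (illustrative name)

def candidate_positions(edges):
    """Enumerate valid (parent, child) candidate positions of a taxonomy.

    Every existing edge is a position; (n, PSEUDO) lets a query attach as a
    new leaf under n, and (PSEUDO, n) lets a query be inserted above n.
    """
    nodes = {n for edge in edges for n in edge}
    positions = list(edges)
    positions += [(n, PSEUDO) for n in sorted(nodes)]  # insert query as a leaf
    positions += [(PSEUDO, n) for n in sorted(nodes)]  # insert query above n
    return positions

toy_edges = [("device", "phone"), ("device", "disk")]
positions = candidate_positions(toy_edges)
```

With three nodes and two edges, this yields 2 + 3 + 3 = 8 candidate positions to score against a query.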
The goal of taxonomy completion is to enrich the existing taxonomy by inserting new concepts. These new concepts are generally extracted from text corpora using entity extraction tools. Since this process is not the focus of the paper, we assume that the set of new concepts is given, as well as their embeddings, denoted by e_q for each new concept q.
3. The TaxoEnrich Framework
In this section, we introduce the TaxoEnrich framework in detail. We first introduce the taxonomy-contextualized embedding generation for each concept node in the existing taxonomy. Then, given the extracted taxonomy-contextualized embeddings, we develop two encoders to learn the representation of candidate positions from the vertical and horizontal views of the taxonomy, respectively. Finally, we propose a query-to-position matching model which leverages various structural information and takes into consideration both fine- and coarse-grained relatedness to boost matching performance. The overall framework of TaxoEnrich is shown in Figure 2.
3.1. Taxonomy-Contextualized Embedding
Here, we describe the generation process of the taxonomy-contextualized embedding for each node in the taxonomy. Different from prior work which leverages static word embeddings, such as Word2Vec and FastText (Shen et al., 2020; Zhang et al., 2021), or contextualized embeddings solely based on an additional text corpus (Yu et al., 2020), we generate taxonomy-contextualized embeddings based on the taxonomy structure and concept surface names. The reason is that neighboring concepts in a taxonomy are likely to share similar semantic meanings, and it is hard to distinguish them based on predefined general-purpose embeddings. In a similar spirit, Zeng et al. (2021) also leverage pretrained language models to produce contextualized embeddings based on a limited number of taxonomy neighbors, with the surface names implicitly utilized in fine-tuning the pretrained language models. In contrast, we aim to fuse the information of all the descendant/ancestral concepts of a given concept, without fine-tuning a huge pretrained language model. Specifically, given a concept node, we build pseudo sentences based on Hearst patterns (Roller et al., 2018) to represent both positional and semantic information. We separately consider descendant/ancestral information by constructing descendant/ancestral pseudo sentences respectively, as shown in Figure 3.
Formally, given a taxonomy T and a candidate concept node n, we extract the following two types of pseudo sentences that represent taxonomic relationships:
Ancestral Pseudo Sentences: We first extract duplicate-free paths connecting the root and the candidate node. In an extracted path l = (a_1, ..., a_k), a_1 is the root node and a_k = n. Along each path, we generate the ancestral pseudo sentence as below:
“a_1, ..., a_{k-1} is a superclass of n”
or “a_1, ..., a_{k-1} is an ascendant of n”.
Any word, such as “superclass” or “ascendant”, that can represent the hierarchical relationship between the path nodes and the candidate node can be used for sentence generation. We denote the collection of such ancestral paths as P_a(n) and the generated sentences as S_a(n).
Descendant Pseudo Sentences: Similarly, duplicate-free paths starting from the candidate node down to leaf nodes are extracted, denoted as l = (d_1, ..., d_m), where d_1 = n and d_m is a leaf node. In this case, along each path, we generate the sentence as below:
“d_2, ..., d_m is a subclass of n”
or “d_2, ..., d_m is a descendant of n”.
We denote the collection of such descendant paths as P_d(n) and the generated sentences as S_d(n).
Then, the set of all generated pseudo sentences for n is the union of its ancestral and descendant pseudo sentences. As visualized in Figure 3, to generate embeddings for the concept node “Disk”, we only consider the ancestral and descendant paths, e.g., “Electronic Devices, Smart Phone is a superclass of Disk”. Note that if the candidate node is a leaf node or the root, we only consider pseudo sentences from one side. Given the generated pseudo sentences, we apply a pretrained language model to generate the taxonomy-contextualized embedding for each node. Specifically, we feed the pseudo sentences to the pretrained language model and collect the last-hidden-state representations of the concept node n, which are averaged to obtain the final taxonomy-contextualized embedding e_n. In our preliminary experiments, we found that SciBERT represents concepts better than other models; hence, in this paper, we choose SciBERT (Beltagy et al., 2019), following (Zeng et al., 2021). The comparison between different pretrained language models in terms of performance is discussed in the appendix. Note that the taxonomy-contextualized embeddings are pre-computed and fixed for the following modules, i.e., we do not fine-tune the large pretrained language model.
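The path extraction and sentence templating above can be sketched as follows (helper names are hypothetical; the descendant side is symmetric, walking child links and using the “subclass” template instead):

```python
def ancestral_paths(parents, node):
    """All duplicate-free paths from a root down to `node` (excluding `node`).

    `parents` maps each node to its list of parents; roots map to [].
    Works on a DAG, where a node may have several ancestral paths.
    """
    if not parents.get(node):
        return [[]]
    paths = []
    for p in parents[node]:
        for prefix in ancestral_paths(parents, p):
            paths.append(prefix + [p])
    return paths

def ancestral_sentences(parents, node):
    """Generate one ancestral pseudo sentence per ancestral path."""
    return [", ".join(path) + f" is a superclass of {node}"
            for path in ancestral_paths(parents, node) if path]

# Toy fragment mirroring the Figure 3 example.
parents = {
    "Disk": ["Smart Phone"],
    "Smart Phone": ["Electronic Devices"],
    "Electronic Devices": [],
}
```

On this fragment, `ancestral_sentences(parents, "Disk")` produces the single sentence "Electronic Devices, Smart Phone is a superclass of Disk".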
3.2. Sequential Feature Encoder
Given the taxonomy-contextualized embedding e_n for each existing node n, we develop a learnable sequential feature encoder to encode the structural information of candidate positions in a vertical view of the taxonomy. For a candidate position consisting of candidate parent p and child c, we produce a parent embedding x_p and a child embedding x_c respectively. Specifically, for candidate parent p and its corresponding ancestral paths, we randomly sample a path l and apply an LSTM sequential encoder which takes the sampled pseudo sentence and the taxonomy-contextualized embeddings as input. Then, we concatenate the final hidden state of the LSTM encoder and the taxonomy-contextualized embedding as the parent embedding. Formally,
x_p = LSTM(l; θ_a) ⊕ e_p,
where θ_a denotes the learnable parameters of the LSTM encoder and ⊕ represents the concatenation operation. Similarly, we generate the child embedding based on the taxonomy-contextualized embedding and a descendant path l' from c:
x_c = LSTM(l'; θ_d) ⊕ e_c,
where θ_d represents the learnable parameters of the second LSTM encoder. The outputs x_p and x_c will be used as the embeddings of the candidate position nodes.
Through this sequential feature encoder, we are able to fuse the structural information of the candidate position in a vertical view. This makes the candidate position representations aware of the “depth” of the candidate position, i.e., whether it is at the top level of the taxonomy close to the root, or at the bottom level close to the leaves.
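A minimal sketch of this encoder (NumPy; a hand-rolled LSTM cell stands in for the learned encoder, and all sizes and weights are illustrative toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 4  # embedding dim and LSTM hidden dim (toy sizes)

W = rng.normal(scale=0.1, size=(4 * h, d + h))  # four gates stacked row-wise
b = np.zeros(4 * h)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_final_state(seq):
    """Run the LSTM cell over a sequence of node embeddings, return final h."""
    h_t, c_t = np.zeros(h), np.zeros(h)
    for x in seq:
        z = W @ np.concatenate([x, h_t]) + b
        i, f = sigmoid(z[:h]), sigmoid(z[h:2 * h])
        o, g = sigmoid(z[2 * h:3 * h]), np.tanh(z[3 * h:])
        c_t = f * c_t + i * g
        h_t = o * np.tanh(c_t)
    return h_t

def position_node_embedding(path_embs, node_emb):
    """Concatenate the LSTM's final state over a sampled path with the
    node's taxonomy-contextualized embedding (the x_p / x_c construction)."""
    return np.concatenate([lstm_final_state(path_embs), node_emb])
```

The resulting vector has dimension h + d, combining path (depth) information with the node's own taxonomy-contextualized embedding.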
3.3. Query-Aware Siblings Encoder
The aforementioned sequential feature encoder incorporates the taxonomy's structural information in a vertical view; however, it is also of great importance to encode the horizontal local information of the candidate position. Thus, we develop another encoder to incorporate structural information in a horizontal view. Specifically, in addition to the candidate parent and child, we consider the candidate siblings of the query node, i.e., the children of the candidate parent.
However, incorporating candidate siblings is more challenging than incorporating the candidate parent and child, for two reasons. First, compared to the candidate parent and child, which compose the candidate position, candidate siblings could introduce noisy information and thus lead to sub-optimal results. For example, at the top level of the taxonomy, the candidate siblings could have quite diverse semantic meanings, which hinders good matching between the candidate position and the query node. Second, since some candidate parents have a substantial number of children (candidate siblings), it is infeasible to incorporate all the candidate siblings without strategic selection.
To tackle these issues, we develop a query-aware siblings encoder, which adaptively selects part of the candidate siblings. Specifically, we measure the relatedness of a given query embedding and each candidate sibling, conditioned on the representation of the candidate parent-child pair. This relatedness is in turn used to aggregate the sibling information into a single siblings embedding. Mathematically, given a candidate position (p, c) with corresponding position embedding x_{(p,c)} and the set of candidate siblings {s_1, ..., s_n}, we use a learnable bilinear matrix W_b to calculate the relatedness r_i of the query embedding q and candidate sibling s_i as
r_i = (q ⊕ x_{(p,c)})ᵀ W_b e_{s_i},
where e_{s_i} is the embedding of sibling s_i and ⊕ denotes concatenation. Then the relatedness score is normalized over the set of candidate siblings by a softmax function:
α_i = exp(r_i) / Σ_{j=1}^{n} exp(r_j).
The normalized score captures the importance of each candidate sibling for the specific query-position matching. In other words, it highlights the siblings relevant to the query conditioned on the candidate position, while lessening the effect of irrelevant siblings. Finally, the sibling embeddings are aggregated based on the normalized scores as
x_s = Σ_{i=1}^{n} α_i e_{s_i}.
During experiments, we found that this query-aware siblings encoder renders good performance when only a subset of siblings is considered, which alleviates the heavy burden of aggregating over the potentially large number of candidate siblings.
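The sibling aggregation can be sketched as follows (NumPy; the exact form of the bilinear conditioning is an assumption of this sketch, since the paper's equation is not reproduced verbatim here):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def sibling_embedding(q, pos, sib_embs, W_b):
    """Aggregate sibling embeddings with query-conditioned attention.

    q: (dq,) query embedding; pos: (dp,) candidate-position embedding;
    sib_embs: (n, d) sibling embeddings; W_b: (dq + dp, d) bilinear matrix.
    Returns the (d,) aggregated siblings embedding x_s.
    """
    scores = (np.concatenate([q, pos]) @ W_b) @ sib_embs.T  # relatedness r_i
    alpha = softmax(scores)                                  # normalized weights
    return alpha @ sib_embs                                  # weighted sum x_s
```

When all siblings score equally, the attention weights are uniform and the output is simply the mean sibling embedding; as scores diverge, the output concentrates on the most query-relevant siblings.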
3.4. Query-Position Matching Model
Finally, given the representations of the candidate parent x_p, child x_c, and siblings x_s, as well as the query embedding q, we are ready to present our final matching module, which outputs the matching score of the query and candidate position for the taxonomy completion task. In particular, we seek to learn a matching model that outputs the desired relatedness score
score(q, (p, c)) = f(q, x_p, x_c, x_s),
where f is a parametrized scoring function.
The previous study (Zhang et al., 2021) showed that a simple matching model learning only one-to-one relatedness between the query node and the position pair ignores the fine-grained relatedness between the query and the position components, i.e., the relatedness between q and p and between q and c. Therefore, inspired by (Zhang et al., 2021), we propose a new matching model which incorporates the additional siblings embedding and learns more precise matching based on both fine-grained (query to candidate parent/child/siblings) and coarse-grained (query to position) relatedness.
To learn both the fine-grained and coarse-grained relatedness between the query node and the candidate positions, we construct multiple auxiliary scorers that separately focus on the relationship between the query node and the candidate parent, the candidate child, the candidate siblings, and the candidate position, respectively. We adopt the Neural Tensor Network (NTN) (Socher et al., 2013) as the base model. Given two vectors u and v, an NTN can be defined as
NTN(u, v) = σ(uᵀ W^{[1:k]} v + V (u ⊕ v) + b),
where σ is the tanh function and W^{[1:k]}, V, and b are learnable parameters. Note that the number of tensor slices k is a hyperparameter of the NTN.
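A compact sketch of the NTN base model (NumPy; shapes are illustrative):

```python
import numpy as np

def ntn(u, v, W, V, b):
    """Neural Tensor Network: k-slice bilinear term plus a linear term.

    u: (du,), v: (dv,), W: (k, du, dv), V: (k, du + dv), b: (k,).
    Returns tanh(u^T W^[1:k] v + V [u; v] + b), a (k,) vector.
    """
    bilinear = np.einsum('i,kij,j->k', u, W, v)  # u^T W^[s] v for each slice s
    return np.tanh(bilinear + V @ np.concatenate([u, v]) + b)
```

Each of the k slices of W captures a different bilinear interaction between the two inputs, which is what gives the NTN more representation power than a plain MLP over the concatenated vectors.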
Then our multiple scorers are defined by applying NTNs to the pairs (q, x_p), (q, x_c), (q, x_s), and (q, x_{(p,c)}), yielding the auxiliary scorers s_p, s_c, s_s, and s_a. We omit the learnable parameters inside each NTN for notational convenience. In this formulation, s_p, s_c, and s_s aim to learn the fine-grained relatedness separately, by predicting whether p, c, and the siblings are a reasonable parent, child, and siblings of the query, respectively. Differently, s_a is designed for the coarse-grained relatedness between the query node and the candidate position. Eventually, we construct a primal scorer which incorporates all the auxiliary scorers.
We omit the inputs of each scoring function for simplicity. In this case, even though the primal scorer and the coarse-grained auxiliary scorer share the same supervision signal, the concatenation of the internal representations of the other auxiliary scorers in the primal scorer allows it to capture accurate matching information even when the coarse-grained scorer alone cannot learn the correct coarse-grained relatedness.
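The combination can be sketched as follows (the `aux_hidden` helper is a hypothetical stand-in for an auxiliary NTN's internal representation; weights and sizes are illustrative):

```python
import numpy as np

def aux_hidden(x, y, P):
    """Hypothetical internal representation of one auxiliary scorer."""
    return np.tanh(P @ np.concatenate([x, y]))

def primal_score(q, x_p, x_c, x_s, params):
    """Concatenate the four auxiliary hidden vectors (query vs. parent,
    child, siblings, and the whole position) and score the result."""
    P_p, P_c, P_s, P_a, w = params
    hidden = np.concatenate([
        aux_hidden(q, x_p, P_p),                        # fine-grained: parent
        aux_hidden(q, x_c, P_c),                        # fine-grained: child
        aux_hidden(q, x_s, P_s),                        # fine-grained: siblings
        aux_hidden(q, np.concatenate([x_p, x_c]), P_a)  # coarse-grained: position
    ])
    return float(w @ hidden)  # primal matching score
```

Because the primal score reads the hidden states of all four auxiliary branches, it can still rank a position correctly when the coarse-grained branch alone is uninformative.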
3.4.1. Learning Objectives
Since each auxiliary scorer is trained as a binary classifier that measures the relatedness between the query node and its target object, we adopt the binary cross-entropy loss. Thus, the learning objective for each scorer can be formulated as
L_s(Θ) = − Σ_{(q, a, y) ∈ D} [ y log ŷ + (1 − y) log(1 − ŷ) ],
where D is the dataset formulated by self-supervised generation following methods similar to those proposed in (Shen et al., 2020; Zhang et al., 2021), (q, a, y) is a generated data pair with binary label y, ŷ is the scorer's predicted relatedness, and s ranges over the scorers. In this case, the final learning objective, which focuses on the primal task, is naturally defined as the sum of the primal scorer's loss and the auxiliary scorers' losses.
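A sketch of the objective (pure Python; the unweighted sum over auxiliary scorers is an assumption of this sketch):

```python
import math

def bce(score, label):
    """Binary cross-entropy on a raw score, with a sigmoid link."""
    p = 1.0 / (1.0 + math.exp(-score))
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def total_loss(scorer_outputs, labels):
    """Sum the mean BCE loss of the primal scorer and every auxiliary scorer.

    scorer_outputs: dict mapping scorer name -> list of raw scores;
    'primal' is the main task, all other keys are auxiliary scorers.
    """
    per_scorer = {
        name: sum(bce(s, y) for s, y in zip(scores, labels)) / len(labels)
        for name, scores in scorer_outputs.items()
    }
    return per_scorer['primal'] + sum(v for k, v in per_scorer.items() if k != 'primal')
```

Since every scorer sees the same self-supervised labels, the auxiliary losses act as extra gradient signal shaping the shared representations rather than as separate tasks.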
Table 2. Taxonomy completion results (mean ± standard deviation) on MAG-CS, MAG-PSY, WordNet-Noun, and WordNet-Verb.

| Method | MR | MRR | Recall@1 | Recall@5 | Recall@10 | Prec@1 | Prec@5 | Prec@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |

MAG-CS:
| Bilinear | 3360.343 ± 6.126 | 0.026 ± 0.000 | 0.000 ± 0.000 | 0.003 ± 0.000 | 0.006 ± 0.000 | 0.001 ± 0.000 | 0.002 ± 0.000 | 0.003 ± 0.000 |
| TaxoExpan | 823.075 ± 114.638 | 0.193 ± 0.007 | 0.030 ± 0.002 | 0.095 ± 0.004 | 0.137 ± 0.007 | 0.132 ± 0.010 | 0.083 ± 0.003 | 0.059 ± 0.003 |
| ARBORIST** | 1142.335 ± 19.249 | 0.133 ± 0.004 | 0.008 ± 0.001 | 0.044 ± 0.003 | 0.075 ± 0.003 | 0.037 ± 0.004 | 0.038 ± 0.003 | 0.033 ± 0.001 |
| TMN | 436.319 ± 13.128 | 0.243 ± 0.005 | 0.056 ± 0.001 | 0.145 ± 0.004 | 0.189 ± 0.005 | 0.245 ± 0.006 | 0.126 ± 0.003 | 0.082 ± 0.002 |
| GenTaxo | 13213.731 ± 662.688 | 0.239 ± 0.006 | 0.082 ± 0.002 | 0.185 ± 0.008 | 0.218 ± 0.008 | 0.254 ± 0.010 | 0.131 ± 0.007 | 0.085 ± 0.003 |
| TaxoEnrich-S | 73.680 ± 1.346 | 0.545 ± 0.002 | 0.154 ± 0.006 | 0.396 ± 0.003 | 0.534 ± 0.002 | 0.251 ± 0.016 | 0.129 ± 0.002 | 0.087 ± 0.001 |
| TaxoEnrich | 87.798 ± 1.512 | 0.578 ± 0.001 | 0.162 ± 0.004 | 0.434 ± 0.005 | 0.574 ± 0.003 | 0.274 ± 0.017 | 0.141 ± 0.002 | 0.093 ± 0.002 |

MAG-PSY:
| Bilinear | 2118.204 ± 4.152 | 0.032 ± 0.000 | 0.000 ± 0.000 | 0.001 ± 0.000 | 0.003 ± 0.000 | 0.000 ± 0.000 | 0.000 ± 0.000 | 0.000 ± 0.000 |
| TaxoExpan | 345.679 ± 24.306 | 0.441 ± 0.005 | 0.122 ± 0.003 | 0.287 ± 0.007 | 0.364 ± 0.009 | 0.249 ± 0.007 | 0.117 ± 0.003 | 0.074 ± 0.002 |
| ARBORIST** | 547.723 ± 20.165 | 0.344 ± 0.012 | 0.062 ± 0.009 | 0.185 ± 0.011 | 0.256 ± 0.013 | 0.126 ± 0.018 | 0.076 ± 0.004 | 0.052 ± 0.003 |
| TMN | 159.550 ± 5.290 | 0.531 ± 0.007 | 0.175 ± 0.002 | 0.369 ± 0.005 | 0.446 ± 0.009 | 0.358 ± 0.004 | 0.150 ± 0.002 | 0.091 ± 0.002 |
| GenTaxo | 7482.516 ± 2600.713 | 0.464 ± 0.022 | 0.183 ± 0.116 | 0.402 ± 0.066 | 0.440 ± 0.039 | 0.376 ± 0.119 | 0.164 ± 0.027 | 0.090 ± 0.008 |
| TaxoEnrich-S | 149.660 ± 3.430 | 0.561 ± 0.005 | 0.221 ± 0.010 | 0.420 ± 0.007 | 0.480 ± 0.007 | 0.365 ± 0.020 | 0.178 ± 0.003 | 0.117 ± 0.001 |
| TaxoEnrich | 122.247 ± 3.241 | 0.583 ± 0.010 | 0.234 ± 0.009 | 0.424 ± 0.013 | 0.510 ± 0.018 | 0.374 ± 0.021 | 0.186 ± 0.002 | 0.124 ± 0.002 |

WordNet-Noun:
| Bilinear | 3290.858 ± 14.668 | 0.196 ± 0.000 | 0.013 ± 0.000 | 0.063 ± 0.000 | 0.109 ± 0.000 | 0.023 ± 0.001 | 0.022 ± 0.000 | 0.019 ± 0.000 |
| TaxoExpan | 970.858 ± 50.995 | 0.390 ± 0.004 | 0.066 ± 0.002 | 0.186 ± 0.003 | 0.269 ± 0.007 | 0.114 ± 0.003 | 0.065 ± 0.001 | 0.047 ± 0.001 |
| ARBORIST** | 2993.341 ± 114.749 | 0.217 ± 0.005 | 0.021 ± 0.001 | 0.073 ± 0.002 | 0.125 ± 0.002 | 0.036 ± 0.021 | 0.025 ± 0.001 | 0.022 ± 0.000 |
| TMN | 827.371 ± 24.310 | 0.367 ± 0.006 | 0.054 ± 0.002 | 0.169 ± 0.002 | 0.256 ± 0.004 | 0.095 ± 0.002 | 0.058 ± 0.000 | 0.044 ± 0.001 |
| GenTaxo | 57871.589 ± 89.230 | 0.286 ± 0.162 | 0.025 ± 0.007 | 0.169 ± 0.049 | 0.268 ± 0.118 | 0.109 ± 0.013 | 0.024 ± 0.007 | 0.029 ± 0.001 |
| TaxoEnrich-S | 230.576 ± 6.472 | 0.426 ± 0.018 | 0.125 ± 0.019 | 0.212 ± 0.012 | 0.321 ± 0.018 | 0.216 ± 0.024 | 0.108 ± 0.004 | 0.078 ± 0.003 |
| TaxoEnrich | 227.839 ± 12.247 | 0.442 ± 0.018 | 0.123 ± 0.012 | 0.248 ± 0.011 | 0.351 ± 0.019 | 0.226 ± 0.023 | 0.115 ± 0.002 | 0.098 ± 0.002 |

WordNet-Verb:
| Bilinear | 1866.736 ± 5.020 | 0.174 ± 0.000 | 0.012 ± 0.001 | 0.054 ± 0.000 | 0.095 ± 0.000 | 0.017 ± 0.001 | 0.016 ± 0.000 | 0.014 ± 0.000 |
| TaxoExpan | 853.308 ± 18.302 | 0.325 ± 0.007 | 0.069 ± 0.001 | 0.169 ± 0.003 | 0.228 ± 0.008 | 0.104 ± 0.002 | 0.051 ± 0.001 | 0.034 ± 0.001 |
| ARBORIST** | 2993.341 ± 4.950 | 0.206 ± 0.011 | 0.016 ± 0.004 | 0.073 ± 0.011 | 0.016 ± 0.011 | 0.024 ± 0.006 | 0.022 ± 0.003 | 0.018 ± 0.002 |
| TMN | 832.541 ± 29.589 | 0.354 ± 0.010 | 0.081 ± 0.007 | 0.194 ± 0.013 | 0.259 ± 0.014 | 0.121 ± 0.011 | 0.059 ± 0.004 | 0.039 ± 0.002 |
| GenTaxo | 2765.745 ± 262.631 | 0.428 ± 0.117 | 0.118 ± 0.069 | 0.208 ± 0.104 | 0.239 ± 0.112 | 0.235 ± 0.152 | 0.122 ± 0.038 | 0.066 ± 0.016 |
| TaxoEnrich-S | 304.565 ± 3.628 | 0.442 ± 0.004 | 0.128 ± 0.003 | 0.256 ± 0.012 | 0.350 ± 0.009 | 0.242 ± 0.005 | 0.121 ± 0.004 | 0.074 ± 0.001 |
| TaxoEnrich | 320.064 ± 14.153 | 0.452 ± 0.005 | 0.143 ± 0.002 | 0.252 ± 0.014 | 0.347 ± 0.006 | 0.276 ± 0.004 | 0.126 ± 0.001 | 0.081 ± 0.002 |
4.1. Experiment Setup
Datasets. We evaluate the performance of the TaxoEnrich framework on the following four real-world large-scale datasets. The statistics of each dataset are listed in Table 1.
Microsoft Academic Graph (MAG): This public Field-of-Study (FoS) taxonomy contains over 660 thousand scientific concepts and more than 700 thousand taxonomic relations. We follow the data preprocessing in (Shen et al., 2020) and select only the sub-taxonomies under the computer science (MAG-CS) and psychology (MAG-PSY) domains (Sinha et al., 2015).
WordNet: We collect the concepts and taxonomic relations of the verb and noun sub-taxonomies of WordNet 3.0 (WordNet-Noun, WordNet-Verb). These two sub-fields are the only parts of WordNet with fully-developed taxonomies. In practice, due to the sparsity of the data, i.e., the many disconnected components in both taxonomies, we added pseudo roots named “Noun” and “Verb” and connected each root to the head of every connected component to obtain a more complete taxonomic structure.
Following the dataset splitting settings used in (Shen et al., 2020; Zhang et al., 2021), we sample 1,000 nodes for validation and 1,000 for testing in each dataset, and use the remaining nodes to construct the initial taxonomy.
4.2. Compared Methods
To fully understand the performance of our framework, we compare our model with the following methods.
Bilinear Model (Sutskever et al., 2009) models the interaction of two concept embeddings, i.e., a ⟨parent, child⟩ entity pair, through a simple bilinear form. This method serves as a basic baseline.
ARBORIST (Manzoor et al., 2020) is a state-of-the-art taxonomy expansion framework which aims for taxonomies with heterogeneous edge semantics and optimizes a large margin ranking loss with a dynamic margin function.
TMN (Zhang et al., 2021) is a state-of-the-art taxonomy completion framework and the first to propose the completion task; it computes the matching score between the query concept and ⟨hypernym, hyponym⟩ pairs.
GenTaxo (Zeng et al., 2021) is a state-of-the-art taxonomy completion framework using both sentence-based and subgraph-based encodings of the nodes to perform the matching. Since part of the framework concentrates on concept name generation, which is not the focus of this paper, we adopt GenTaxo++ and assume the newly added nodes are given. (Note that since the implementation of GenTaxo is not released, we implemented the framework based on the description in the paper.)
We also include two variants of TaxoEnrich in experiments for ablation study:
TaxoEnrich-S: In this variant, we exclude the sibling information from the matching model, since in sparse taxonomies such as WordNet, the siblings cannot precisely represent the candidate positions and might introduce noisy information when computing the relatedness between the query node and the candidate position.
TaxoEnrich: In this variant, we adopt the full framework of TaxoEnrich as described above. We examine the difference between the two variants in further experiments.
4.3. Evaluation Metrics
Since the model outputs a ranking list of candidate positions for each query node, following the guidelines in (Shen et al., 2020; Zhang et al., 2021), we utilize the following rank-based metrics to evaluate the performance of our framework and the compared methods.
Mean Rank (MR). This metric measures the average rank position of a query concept's true positions among all candidate positions. For queries with multiple correct positions, we first calculate the rank position of each individual triplet and then take the average of all rank positions. A smaller value indicates better performance.
Mean Reciprocal Rank (MRR). We follow (Ying et al., 2018) and compute the reciprocal rank of a query concept's true positions using a scaled MRR: we scale the MRR by 10 to make the differences between models clearer.
Recall@k measures the number of query concepts' true positions ranked in the top k, divided by the total number of true positions of all query concepts.
Precision@k measures the number of query concepts' true positions ranked in the top k, divided by the total number of queries times k.
For all the evaluation metrics listed above except MR, a larger value indicates better performance. Since MR and MRR are the only metrics that reflect the overall ranking quality of all predictions in the taxonomy, we consider them the most important metrics for evaluation.
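These four metrics can be computed directly from the 1-based ranks of all true positions; a minimal sketch (the ×10 MRR scaling follows the description above):

```python
def rank_metrics(true_position_ranks, num_queries, k=10):
    """Compute MR, scaled MRR, Recall@k, and Precision@k.

    true_position_ranks: 1-based rank of every true position, pooled
    across all queries (a query with several true positions contributes
    one rank per position).
    """
    n = len(true_position_ranks)
    mr = sum(true_position_ranks) / n
    mrr = 10 * sum(1.0 / r for r in true_position_ranks) / n  # scaled by 10
    hits = sum(1 for r in true_position_ranks if r <= k)
    recall_k = hits / n
    precision_k = hits / (num_queries * k)
    return mr, mrr, recall_k, precision_k
```

For example, ranks [1, 4, 20] over two queries give MR = 25/3 and Recall@10 = 2/3, since two of the three true positions fall in the top 10.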
5. Experimental Results
In this section, we first discuss the experimental results on both taxonomy completion and expansion tasks, which demonstrate the superiority of our TaxoEnrich method. Then, to further understand the contributions of each model design, we conduct ablation studies. Finally, we perform case studies to further illustrate the effectiveness of TaxoEnrich.
5.1. Performance on Taxonomy Completion
The overall performance of the compared methods and the proposed framework is shown in Table 2. First, we can see that performance tends to improve as the complexity of the modeled local structure increases, from one-to-one matching in TaxoExpan, to triplets in TMN, to the neighboring paths and subgraph encodings in GenTaxo. Second, we can observe the power of pretrained language models in representing concept nodes in the taxonomy: the frameworks utilizing language models, including GenTaxo and TaxoEnrich, generally perform better on the precision@k and recall@k metrics.
In terms of MR, TaxoEnrich obtains the largest performance improvement on the MAG-CS dataset, since the computer science taxonomy has the most complete taxonomic structure among the datasets, allowing for more accurate taxonomy-contextualized embeddings from the procedure in Section 3.1. On the WordNet datasets, the MR metric is also improved by a relatively large margin, even though all frameworks perform worse than on the MAG datasets. In terms of precision@k and recall@k, our method also shows noticeable improvement over the baselines. Among the previous methods, the static-embedding approaches fail to capture the similar semantic meanings of different concept nodes. GenTaxo renders competitive performance on these two metrics, but tends to be unstable and performs poorly on the ranking metrics. The primary reason is that while its language-based embeddings provide fairly accurate positional information, its light-weight MLP matching module prevents it from capturing useful relatedness between the query node and the candidate position.
On the two WordNet datasets, the other frameworks show similarly poor performance due to the sparsity of the taxonomies: the lack of connectivity makes it difficult for the matching module to extract relations during training. The manually added pseudo root for sentence generation preserves the taxonomic structure in the representations of concept nodes and candidate positions, allowing our framework to capture both the structural and semantic information of each node.
Comparing TaxoEnrich-S and TaxoEnrich, we observe that on the two MAG datasets, incorporating sibling information in TaxoEnrich yields better performance overall. However, it also causes a drop in the MR metric on all datasets except MAG-PSY and WordNet-Noun, since the randomly extracted siblings still introduce noisy information into the matching module. On the WordNet datasets, the performance of the two variants is very similar: the sparsity of the taxonomies, i.e., the lack of siblings, can mislead the model into incorporating inaccurate sibling information, causing a clear difference in the MR metric. On the other hand, the precision of TaxoEnrich is still better than that of TaxoEnrich-S, which illustrates the effectiveness of siblings in representing positional information.
5.2. Performance on Taxonomy Expansion
Taxonomy expansion is a special case of the taxonomy completion task where all new concepts are leaf nodes. We therefore further explore the performance of TaxoEnrich on the taxonomy expansion task on the MAG-CS and WordNet-Verb datasets, compared with the TaxoExpan, TMN, and GenTaxo frameworks. As indicated in Table 3, TaxoEnrich outperforms the other methods by a large margin on all metrics.
Table 3. Taxonomy expansion results (mean ± standard deviation) on MAG-CS (top) and WordNet-Verb (bottom).

| Method | MR | MRR | Recall@1 | Prec@1 |
| --- | --- | --- | --- | --- |

MAG-CS:
| TaxoExpan | 197.776 ± 16.038 | 0.562 ± 0.023 | 0.100 ± 0.011 | 0.163 ± 0.018 |
| TMN | 118.963 ± 6.307 | 0.689 ± 0.005 | 0.174 ± 0.002 | 0.283 ± 0.004 |
| GenTaxo | 140.262 ± 40.398 | 0.634 ± 0.044 | 0.149 ± 0.020 | 0.294 ± 0.096 |
| TaxoEnrich-S | 67.947 ± 1.121 | 0.721 ± 0.008 | 0.182 ± 0.005 | 0.304 ± 0.008 |

WordNet-Verb:
| TaxoExpan | 665.409 ± 137.250 | 0.406 ± 0.056 | 0.085 ± 0.018 | 0.095 ± 0.004 |
| TMN | 615.021 ± 166.375 | 0.423 ± 0.056 | 0.110 ± 0.021 | 0.124 ± 0.009 |
| GenTaxo | 6046.363 ± 439.305 | 0.155 ± 0.010 | 0.094 ± 0.019 | 0.141 ± 0.079 |
| TaxoEnrich-S | 217.842 ± 5.230 | 0.481 ± 0.071 | 0.162 ± 0.082 | 0.294 ± 0.031 |

Table 4. Effect of including siblings during embedding generation (MAG-CS).

| Method | MR | MRR | Recall@1 | Prec@1 |
| --- | --- | --- | --- | --- |
| TaxoEnrich-Sib | 122.144 ± 3.219 | 0.513 ± 0.006 | 0.138 ± 0.000 | 0.224 ± 0.001 |
| TaxoEnrich-S | 73.680 ± 1.346 | 0.545 ± 0.002 | 0.154 ± 0.006 | 0.251 ± 0.016 |
Table 5. Comparison of different feature encoders on MAG-CS.

| Method | Features | MR | Recall@1 | Prec@1 |
| --- | --- | --- | --- | --- |
| TaxoExpan | Raw embedding | 3360.343 ± 6.126 | 0.000 ± 0.000 | 0.001 ± 0.001 |
| TaxoExpan | Raw + PGAT | 823.075 ± 114.638 | 0.030 ± 0.002 | 0.132 ± 0.010 |
| TMN | Raw embedding | 636.254 ± 36.465 | 0.036 ± 0.005 | 0.156 ± 0.008 |
| TMN | Raw + LSTM + PGAT | 436.319 ± 13.128 | 0.056 ± 0.001 | 0.245 ± 0.006 |
| TaxoEnrich-S | Raw embedding | 103.016 ± 6.589 | 0.145 ± 0.004 | 0.236 ± 0.010 |
| TaxoEnrich-S | Raw + LSTM | 73.680 ± 1.346 | 0.154 ± 0.006 | 0.251 ± 0.024 |
| TaxoEnrich-S | Raw + LSTM + PGAT | 100.188 ± 2.214 | 0.150 ± 0.004 | 0.244 ± 0.001 |
Table 6. Case studies on MAG-CS: rankings of true positions for example queries.

| Method | Query | Ranking of true positions |
| --- | --- | --- |
| TMN | — | 530, 634, 4884 |
| TaxoEnrich | all pairs testing | 1, 2 |
| TaxoEnrich | sensor hub | 1, 3, 4 |
| TMN | — | 1, 78, 3 |
5.3. Ablation Studies
In this section, we conduct ablation studies on the major components of the TaxoEnrich framework: 1) the incorporation of sibling information separately from embedding generation; 2) the implementation of different feature encoders to capture different structural information. Note that in the ablation experiments we use the TaxoEnrich-S model for a more direct and simpler comparison between the embeddings and modules. Additional ablation studies on hyperparameters are presented in the appendix.
5.3.1. The Effectiveness of Query-Aware Sibling Encoder.
In this section, we further discuss the effectiveness of different approaches to incorporating sibling information in our framework. We argue that simply including sibling information in the embeddings would introduce noise. In many cases, high-level concepts such as "Artificial Intelligence" or "Machine Learning" in the MAG-CS taxonomy have thousands of children, so it is unrealistic to consider all siblings and unreasonable to pick some of them at random. We verify this assumption experimentally by randomly selecting at most 5 siblings during embedding generation. As shown in Table 4, this not only prevents the framework from recognizing correct positions, but also roughly triples the embedding generation time in our implementation.
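The capped random sibling selection tested above can be sketched as follows. This is a minimal illustration; the function name and interface are ours, not the paper's implementation:

```python
import random

def sample_siblings(siblings, cap=5, seed=None):
    """Randomly keep at most `cap` siblings for embedding generation.

    Illustrative sketch of the random selection ablated above: nodes with
    few siblings keep all of them, while very wide nodes (e.g., a concept
    with thousands of children) are subsampled.
    """
    rng = random.Random(seed)
    if len(siblings) <= cap:
        return list(siblings)
    return rng.sample(siblings, cap)

# A high-level node like "machine learning" may have thousands of children.
children = [f"concept_{i}" for i in range(3000)]
kept = sample_siblings(children, cap=5, seed=0)
```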
5.3.2. Different Feature Encoders in TaxoEnrich
We next examine the performance of TaxoEnrich with different feature encoders, and the effectiveness of these encoders in other frameworks. Encoding features before matching has been explored by previous methods (Shen et al., 2020; Yu et al., 2020), which show that the neighboring terms of the candidate position help better exploit structural information. We compare different feature encodings: the raw embeddings, LSTM encoding, PGAT encoding, and their combinations, i.e., concatenating the encoder outputs as the input to the matching module. The experiments in Table 5 show that encoded features improve the performance of all frameworks by a large margin, and that TaxoEnrich still outperforms the other methods regardless of which encoded concept embeddings are used.
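The "Raw + LSTM + PGAT" style combination compared above amounts to concatenating encoder outputs before matching, as in this small sketch. The dimensions, the tanh-of-linear stand-ins for the learned LSTM/PGAT encoders, and the function name are our assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
d_raw, d_enc = 250, 500  # raw embedding and encoder output sizes (illustrative)

# Stand-ins for the learned encoders; the real LSTM/PGAT are neural modules.
W_lstm = rng.normal(size=(d_raw, d_enc))
W_pgat = rng.normal(size=(d_raw, d_enc))

def encode(raw, use_lstm=True, use_pgat=True):
    """Concatenate the selected encoder outputs with the raw embedding,
    mimicking the encoder combinations compared in Table 5."""
    parts = [raw]
    if use_lstm:
        parts.append(np.tanh(raw @ W_lstm))
    if use_pgat:
        parts.append(np.tanh(raw @ W_pgat))
    return np.concatenate(parts, axis=-1)

x = rng.normal(size=(d_raw,))
full = encode(x)                      # Raw + LSTM + PGAT -> 1250-dim input
raw_only = encode(x, use_lstm=False, use_pgat=False)
```

The matching module then scores this concatenated vector against a candidate position, so richer encodings change only the input dimension, not the matching architecture.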
5.4. Case Studies
We demonstrate the effectiveness of the TaxoEnrich framework by predicting the true positions of several query concepts in the MAG-CS dataset in Table 6. For high-level concepts like "heap", TaxoEnrich ranks all the true positions in the top 5, while TMN identifies only part of the true positions, such as ("algorithm", leaf) for "heap". For leaf nodes such as "all pairs testing" and "sensor hub", TMN performs much better, but it still includes coarse high-level concepts such as "machine learning" and "artificial intelligence" among its top predictions. In general, TaxoEnrich recovers true positions better than the baselines, and its top predictions show reasonable consistency.
6. Related Work
Automatic taxonomy construction is a long-standing task in the literature. Existing taxonomy construction methods leverage lexical features from the resource corpus, such as lexical patterns (Nakashole et al., 2012; Jiang et al., 2017; Hearst, 1992; Agichtein and Gravano, 2000) or distributional representations (Mao et al., 2018; Zhang et al., 2018a; Jin et al., 2018; Luu et al., 2016; Roller et al., 2014; Weeds et al., 2004), to construct a taxonomy from scratch. However, in many real-world applications, existing taxonomies have already been laboriously curated and deployed in online systems, which calls for solutions to the taxonomy expansion problem. To this end, numerous methods have been proposed recently (Manzoor et al., 2020; Shen et al., 2020; Yu et al., 2020; Mao et al., 2020). For example, ARBORIST (Manzoor et al., 2020) expands taxonomies by jointly learning latent representations for edge semantics and taxonomy concepts; TaxoExpan (Shen et al., 2020) proposes position-enhanced graph neural networks to encode the relative position of terms, trained with a robust InfoNCE loss (Oord et al., 2018); and STEAM (Yu et al., 2020) re-formulates taxonomy expansion as a mini-path-based prediction task solved through a multi-view co-training objective. Other methods target taxonomy completion: TMN (Zhang et al., 2021) addresses the task with a channel-gating mechanism and a triplet matching network, and GenTaxo (Zeng et al., 2021) collects complex local-structure information and learns to generate a concept's full name from the corpus.
7. Conclusion
In this paper, we proposed TaxoEnrich to enhance the taxonomy completion task with self-supervision. It captures the hierarchical and semantic information of concept nodes based on the taxonomic relations in the existing taxonomy. In addition, the selective query-aware attention module and the carefully designed matching module further improve the learning of relatedness between a query node and a candidate position. Extensive experimental results demonstrate the effectiveness of TaxoEnrich: it largely outperforms previous methods and achieves state-of-the-art performance on both taxonomy completion and expansion tasks.
References
- Agichtein and Gravano (2000). Snowball: extracting relations from large plain-text collections. In DL '00.
- Beltagy et al. (2019). SciBERT: a pretrained language model for scientific text. In EMNLP.
- Gupta et al. (2017). Taxonomy induction using hypernym subsequences. In CIKM, pp. 1329–1338.
- Hearst (1992). Automatic acquisition of hyponyms from large text corpora. In COLING 1992, Volume 2.
- Huang et al. (2020). CoRel: seed-guided topical taxonomy construction by concept learning and relation transferring. In KDD, pp. 1928–1936.
- Jiang et al. (2017). MetaPAD: meta pattern discovery from massive text corpora. In KDD.
- Jin et al. (2018). Junction tree variational autoencoder for molecular graph generation. In ICML.
- Kingma and Ba (2017). Adam: a method for stochastic optimization.
- Lipscomb (2000). Medical Subject Headings (MeSH). Bulletin of the Medical Library Association 88(3), p. 265.
- Liu et al. (2012). Automatic taxonomy construction from keywords. In KDD, pp. 1433–1441.
- Luu et al. (2016). Learning term embeddings for taxonomic relation identification using dynamic weighting neural network. In EMNLP.
- Manzoor et al. (2020). Expanding taxonomies with implicit edge semantics. In The Web Conference 2020, pp. 2044–2054.
- Mao et al. (2018). End-to-end reinforcement learning for automatic taxonomy induction. arXiv:1805.04044.
- Mao et al. (2020). Octet: online catalog taxonomy enrichment with self-supervision. In KDD.
- Nakashole et al. (2012). PATTY: a taxonomy of relational patterns with semantic types. In EMNLP.
- Oord et al. (2018). Representation learning with contrastive predictive coding. arXiv:1807.03748.
- Roller et al. (2014). Inclusive yet selective: supervised distributional hypernymy detection. In COLING.
- Roller et al. (2018). Hearst patterns revisited: automatic hypernym detection from large text corpora. In ACL (Volume 2: Short Papers), pp. 358–363.
- Shen et al. (2018). Entity set search of scientific literature: an unsupervised ranking approach. In SIGIR.
- Shen et al. (2020). TaxoExpan: self-supervised taxonomy expansion with position-enhanced graph neural network. In The Web Conference 2020, pp. 486–497.
- Sinha et al. (2015). An overview of Microsoft Academic Service (MAS) and applications. In WWW, pp. 243–246.
- Socher et al. (2013). Reasoning with neural tensor networks for knowledge base completion. In NeurIPS, pp. 926–934.
- Song et al. (2021). Who should go first? A self-supervised concept sorting model for improving taxonomy expansion. arXiv:2104.03682.
- Sutskever et al. (2009). Modelling relational data using Bayesian clustered tensor factorization. In NeurIPS.
- Vrandečić (2012). Wikidata: a new platform for collaborative data collection. In WWW '12 Companion, pp. 1063–1064.
- Wang et al. (2013). A phrase mining framework for recursive construction of a topical hierarchy. In KDD, pp. 437–445.
- Weeds et al. (2004). Characterising measures of lexical distributional similarity. In COLING.
- Yang et al. (2020). Co-embedding network nodes and hierarchical labels with taxonomy based generative adversarial networks. In ICDM, pp. 721–730.
- Ying et al. (2018). Graph convolutional neural networks for web-scale recommender systems. In KDD, pp. 974–983.
- Yu et al. (2020). STEAM: self-supervised taxonomy expansion with mini-paths. In KDD, pp. 1026–1035.
- Zeng et al. (2021). Enhancing taxonomy completion with concept generation via fusing relational representations. arXiv:2106.02974.
- Zhang et al. (2018a). TaxoGen: constructing topical concept taxonomy by adaptive term embedding and clustering. In KDD.
- Zhang et al. (2018b). TaxoGen: unsupervised topic taxonomy construction by adaptive term embedding and clustering. arXiv:1812.09551.
- Zhang et al. (2021). Taxonomy completion via triplet matching network. In AAAI, Vol. 35, pp. 4662–4670.
Appendix A Implementation Details
A.1. Baseline Models
TaxoExpan and ARBORIST (Shen et al., 2020; Manzoor et al., 2020) were designed for the taxonomy expansion task. We follow the implementation in (Zhang et al., 2021) to calculate the ranking of candidate positions from the single score output by their matching models, so that all methods produce comparable outputs for evaluation. For TaxoExpan (Shen et al., 2020), we implemented the full framework with the PGAT propagation method and the InfoNCE loss (Oord et al., 2018). In the matching-model comparison experiments of (Zhang et al., 2021), all methods leveraged only the initial embeddings without any distribution models. For TMN, we implemented the raw embedding + LSTM and PGAT encoders on top of the original triplet matching network for a full comparison. For GenTaxo, we used the same distribution model as in TaxoEnrich.
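The conversion above, from one matching score per candidate position to the ranks of the true positions, can be sketched as follows. The helper name and data layout are our assumptions for illustration:

```python
import numpy as np

def ranks_of_true_positions(scores, true_idx):
    """Rank candidate positions by descending matching score and return
    the 1-based ranks of the true positions (illustrative helper)."""
    order = np.argsort(-np.asarray(scores))  # best-scoring candidate first
    rank_of = {cand: r + 1 for r, cand in enumerate(order)}
    return sorted(rank_of[i] for i in true_idx)

scores = [0.1, 0.9, 0.3, 0.7]  # one matching score per candidate position
ranks = ranks_of_true_positions(scores, true_idx=[1, 3])
```

Metrics such as MR, Hit@k, and MRR can then be computed directly from these ranks, which is what makes the single-score baselines comparable to TaxoEnrich.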
Note that in previous methods such as TaxoExpan, ARBORIST, and TMN, the initial embeddings were generated by static word embedding methods. In the MAG datasets, the embedding of each concept node is a 250-dimensional vector computed with Word2Vec; in the WordNet datasets, the embeddings are 300-dimensional vectors generated with FastText.
A.2. Hyperparameter Settings
In the implementation of TaxoEnrich, we use the Adam optimizer (Kingma and Ba, 2017) with a learning rate of 0.001, and apply a scheduler that multiplies the learning rate by a factor of 0.5 after 10 epochs without improvement. The hidden dimension of the LSTM encoders is set to 500, and the number of bilinear models in the matching module is 10. The number of siblings selected in the attention module is 5 for training and 20 for testing. The model is trained for up to 200 epochs, with early stopping if the MR metric on the validation set does not improve for more than 10 epochs. The remaining hyperparameters are shared across all datasets to avoid heavy parameter tuning; in particular, the batch size is 16 for both TaxoEnrich and TaxoEnrich-S.
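The learning-rate schedule and early-stopping rule above can be sketched with a small controller. This is an illustrative sketch, not the paper's code; the class name and the way the two patience counters interact are our assumptions:

```python
class TrainingController:
    """Halve the learning rate after `lr_patience` non-improving epochs and
    stop after more than `stop_patience` non-improving epochs (lower MR is
    better). Illustrative sketch of the schedule described above."""

    def __init__(self, lr=0.001, lr_patience=10, stop_patience=10):
        self.lr = lr
        self.lr_patience = lr_patience
        self.stop_patience = stop_patience
        self.best_mr = float("inf")
        self.bad_epochs = 0

    def step(self, val_mr):
        """Record this epoch's validation MR; return False when training should stop."""
        if val_mr < self.best_mr:
            self.best_mr = val_mr
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs % self.lr_patience == 0:
                self.lr *= 0.5  # scheduler: decay by factor 0.5
        return self.bad_epochs <= self.stop_patience

ctrl = TrainingController()
for mr in [120.0, 90.0] + [95.0] * 11:  # validation MR stops improving after epoch 2
    keep_going = ctrl.step(mr)
```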
Appendix B Additional Ablation Studies
B.1. Hyperparameter Tuning
We conduct additional ablation studies via hyperparameter search. Examining the influence of the batch size on the TaxoEnrich-S framework, we find that a batch size of 16 tends to perform better than the alternatives.
We also explore the learning rate of the newly incorporated sibling loss. Lower values yield slightly better performance on the MAG-CS datasets, since sibling information still introduces noise if it is weighted equally with parent and children relatedness.
B.2. Sentence Encoder Studies
The comparison between different pretrained language models as sentence encoders is also studied under the settings described above, with results shown in Table 7. SciBERT achieves the best performance among all language models, with the Transformer encoder producing very similar results, while BERT performs relatively poorly. A likely reason is that BERT is pretrained on general-domain text, making it less accurate at representing the scientific domain-specific concepts in the MAG-CS dataset.