
Towards Combinational Relation Linking over Knowledge Graphs

10/22/2019
by   Weiguo Zheng, et al.

Given a natural language phrase, relation linking aims to find a relation (predicate or property) from the underlying knowledge graph to match the phrase. It is very useful in many applications, such as natural language question answering, personalized recommendation, and text summarization. However, previous relation linking algorithms usually produce a single relation for the input phrase and pay little attention to a more general and challenging problem, i.e., combinational relation linking, which extracts a subgraph pattern to match a compound phrase (e.g., mother-in-law). In this paper, we focus on the task of combinational relation linking over knowledge graphs. To resolve the problem, we design a systematic method based on a data-driven relation assembly technique, which is performed under the guidance of meta patterns. We also introduce external knowledge to enhance the system's understanding ability. Finally, we conduct extensive experiments over a real knowledge graph to study the performance of the proposed method.


1 Introduction

Knowledge graphs have become important repositories that materialize huge amounts of structured information in the form of triples, where a triple consists of ⟨subject, predicate, object⟩ or ⟨subject, property, value⟩. There are many such knowledge graphs, e.g., DBpedia [1], Yago [26], and Freebase [5]. In order to bridge the gap between unstructured text (including text documents and natural language questions) and structured knowledge, an important and interesting task is relation linking over the knowledge graph, i.e., finding the specific predicates/properties in the knowledge graph that match the phrases detected in a sentence (which may also be a question).

Relation linking can power many downstream applications. As a friendly and intuitive approach to exploring knowledge graphs, using natural language questions to query the knowledge graph has attracted a lot of attention in both academia and industry [4, 2, 8, 15, 16]. Generally, simple questions, e.g., who is the founder of Microsoft, are easy to answer since it is straightforward to choose the predicate "founder" from the knowledge graph to match the phrase "founder" in the input question. However, many questions are difficult to deal with due to the intrinsic variability and ambiguity of natural language.

Running Example 1.

Let us consider the question "Who is the mother-in-law of Barack Obama?". It may be hard to answer when there is no predicate/property that directly matches the phrase "mother-in-law". Actually, the combination of the predicates "mother" and "spouse" should be inferred as the match. Precisely, it can be represented as the mother of one's spouse, as depicted in Figure 1, where the dashed line does not exist in the underlying knowledge graph.

For ease of presentation, we do not distinguish predicates and properties in the following discussion unless it is necessary. Besides natural language question answering, relation linking can be helpful in many other applications, such as personalized recommendation [7] and text summarization [29].

Figure 1: Example of combinational relations matching the compound phrase mother-in-law.

Intuitively, finding the mapping predicates for input phrases can be considered a similarity search problem: results are delivered by computing the similarity (or some distance measurement) between phrases and candidate predicates. Traditionally, the Levenshtein distance is used to measure the difference between two strings [19]. However, a predicate may need to link to a phrase even though their surface-form distance is large. For instance, the predicate "spouse" matches the phrases "married to", "wife of", and "husband of", but their Levenshtein distance is large. Moreover, this measure fails to distinguish two strings that are literally similar but describe different semantic meanings, e.g., attitude and latitude. In order to overcome these two problems, word embedding models are widely used to improve relation linking performance. There are also some works resorting to external taxonomies like WordNet (https://wordnet.princeton.edu/); synonyms, hyponyms, and variations are extracted to enhance the matching from phrases to predicates in the knowledge graph [3, 22]. Another stream of research performs entity linking (identifying the entity in the target knowledge graph that matches the input phrase) and relation linking as a joint task rather than treating them as separate tasks [32, 9, 33].
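To make the limitation concrete, the following minimal sketch (not from the paper) computes the Levenshtein distance for the two cases above: the semantically equivalent pair has the larger surface-form distance.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# "spouse" vs. "married to": semantically matching, large surface distance
d_semantic = levenshtein("spouse", "married to")
# "attitude" vs. "latitude": semantically unrelated, small surface distance
d_literal = levenshtein("attitude", "latitude")
```

Here `d_literal` is 2 while `d_semantic` is far larger, so a pure edit-distance ranker prefers the wrong candidate.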

However, existing relation linking systems aim to extract one predicate to match the input phrase, which may decrease the overall performance when multiple predicates are required to match a single phrase. For instance, as shown in Running Example 1, the phrase "mother-in-law" matches a path in the knowledge graph. For simplicity, a phrase is called a compound phrase if it can be grounded to a group of predicates or properties which, as a whole, match the phrase. To enhance the ability to understand compound phrases from the view of knowledge graphs, we study the problem of combinational relation linking in this paper, i.e., finding a subgraph pattern in the knowledge graph to match the input phrase. Notice that we focus on the relation linking task and do not take entities into consideration, as entities may be unavailable in the input text or query. For example, a natural language question or keyword query is not required to contain entities.

Challenges and Contributions. Actually, traditional relation linking is a special case of our proposed combinational relation linking in which only one edge pattern (i.e., a predicate/property) is detected. Nevertheless, the algorithms designed for traditional relation linking cannot be used to solve combinational relation linking directly. In order to perform combinational relation linking, the following two challenges must be addressed.

Challenge One: The gap between the phrase and the desired subgraph pattern. Different from the single edge pattern, the desired mapping for a compound phrase is a subgraph pattern, whereas the input phrase consists of a short sequence of words or even a single word; e.g., the phrase "grandfather" may match a two-edge pattern expressing the parent of one's parent. Thus we need to devise an effective mechanism to bridge the gap.

Challenge Two: It is difficult to determine how many predicates/properties constitute a match. The target is to infer a subgraph pattern for the input compound phrase, but it is unknown what the matched pattern is and how many edges (predicates and properties) the pattern contains, which increases the difficulty of combinational relation linking.

Let us consider the process of manually performing relation linking. When an expert does not understand the input compound phrase, she/he may resort to a dictionary or search engine to make it clear. Inspired by this process, we propose to use external knowledge to bridge the representation gap between phrases and subgraph patterns. The external knowledge, e.g., the Oxford Dictionaries API (https://developer.oxforddictionaries.com/) or Wikipedia (https://www.wikipedia.org/), is invoked to interpret the phrase when it is not understood by the system. Even if the phrase can be better understood by employing this side information, it remains a challenging problem to ground the phrase to a subgraph pattern since the structure is unknown. In order to determine the subgraph pattern, we design a group of meta patterns based on which the target subgraph pattern can be retrieved in a recursive manner.

In summary, we make the following contributions in this paper:

  • We design a systematic method to resolve the problem of combinational relation linking over knowledge graphs;

  • We propose to use external knowledge to facilitate combinational relation linking;

  • A recursive relation assembly technique based on meta patterns is devised to enhance the linking;

  • Experimental results on two benchmarks show that our approach outperforms state-of-the-art algorithms.

The rest of this paper is organized as follows. Section 2 introduces the problem definition and framework of the approach. Section 3 introduces how to integrate external knowledge and defines several meta patterns. Section 4 presents the process of recursive relation assembly based on meta patterns. The experimental results are provided in Section 5, followed by a brief review of related work in Section 6. Finally, Section 7 concludes the paper.

2 Problem Definition and Framework

Figure 2: Framework of the approach.

2.1 Problem Definition

In this section, we first review some basic notions and then give the framework of the approach. In this paper, the knowledge graph is defined as in Definition 2.1. For instance, ⟨Oswald Lange, birthPlace, Germany⟩ is a triple in DBpedia. There is a special predicate "type" for each entity whose object is a type (e.g., Actor or Movie).

Definition 2.1.

(Knowledge graph, denoted by G). A directed graph consisting of triples ⟨subject, predicate, object⟩ or ⟨subject, property, value⟩, where subjects/objects are entities, and predicates/properties are relations.

Definition 2.2.

(Subgraph pattern). A subgraph pattern corresponds to a subgraph of the knowledge graph G, where each node v is labeled with its type if v corresponds to an entity in G.

Figure 1 presents a subgraph pattern. Note that a node of a subgraph pattern does not necessarily correspond to an entity in G. For example, a node can be a literal.

Definition 2.3.

(Compound phrase). A phrase p is called a compound phrase with regard to the knowledge graph G if p can match a subgraph pattern in G.

As shown in Definition 2.3, a compound phrase is a relative term with respect to the target knowledge graph G. For instance, the phrase "mother-in-law" is not compound if the underlying knowledge graph contains a predicate that directly describes this relation. The task of the paper is thus defined as follows.

Problem Statement 1.

Given a compound phrase p and the underlying knowledge graph G, extract the combinational relations (including predicates and properties), i.e., a subgraph pattern, from G to match the phrase p.

2.2 Framework

The overview of the proposed approach is depicted in Figure 2. It consists of two components, i.e., meta pattern recognition and compound phrase linking.

In the first component, we introduce a group of meta patterns that can power the relation linking. Given a compound phrase, we can obtain its concrete meanings through external knowledge, e.g., a sentence interpreting the phrase. Then the meta pattern of the interpretation sentence is recognized.

In the second component, the subgraph pattern is constructed by filling the slots (nodes and edges) in the meta pattern generated above. The construction proceeds in a recursive manner.

3 Meta Pattern Recognition

In this section, we first perform compound phrase understanding, and then define the meta patterns. Finally, we discuss how to recognize the meta pattern of an interpretation sentence for the input compound phrase.

3.1 Compound phrase understanding

Due to the gap between unstructured natural language and the knowledge graph G, a compound phrase may not directly map to a subgraph in G. We adopt external knowledge, e.g., Wikipedia, the Oxford Dictionaries API, and the Cambridge Dictionary API (https://dictionary.cambridge.org/zhs/), to explain these relation phrases with simple sentences that describe their concrete meanings. For instance, Wikipedia gives the explanation of "mother-in-law" as: "A mother-in-law is the mother of a person's spouse". Clearly, the explanation provides more information about the input phrase, which is helpful for extracting the desired subgraph pattern.

3.2 Meta Pattern

In this subsection, we introduce some meta patterns which could facilitate the relation linking task. As discussed above, it is a challenging task to determine the structure of the desired subgraph pattern directly. To resolve the problem, we propose a recursive assembly mechanism to construct the match based on several meta patterns. In principle, the meta patterns are very limited and can be enumerated in advance.

Definition 3.1.

(Meta pattern). A meta pattern consists of at most two edges, where all the nodes and edges are unlabeled.

There are four meta patterns, as presented in Figure 3: a single-edge pattern, corresponding to one predicate or property; a progressive pattern, e.g., the pattern in Figure 1 describing the compound phrase "mother-in-law"; a converging coordinative pattern, e.g., the phrase "kinfolk" (a person from the same family); and a diverging coordinative pattern, e.g., the phrase "sportsman" (a person of a particular gender who plays sport).

Generally, each subgraph pattern can be assembled from these meta patterns. Note that the single-edge pattern corresponds to traditional relation linking, which delivers a single predicate or property in the knowledge graph. Actually, every subgraph pattern could be assembled using the single-edge pattern alone. However, that would make it harder to infer the structure from the interpretation sentence, which would in turn decrease linking performance. On the other hand, larger meta patterns (e.g., a meta pattern consisting of three or more edges) rarely occur in an explanation sentence directly, and are also more difficult to recognize. Therefore, meta patterns of at most two edges strike a good balance between representation ability and recognition difficulty.
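The four meta patterns can be written down as data over unlabeled edge slots. The sketch below is an assumption about the shapes: the edge directions for the converging/diverging cases are inferred from the prose, since Figure 3 is a drawing.

```python
# Each meta pattern is a list of unlabeled edge slots (head, tail) over the
# placeholder nodes x, y (endpoints) and z (an intermediate node). Directions
# for the converging/diverging patterns are assumptions. Labels are filled in
# later by meta element detection.
META_PATTERNS = {
    "single":      [("x", "y")],               # traditional relation linking
    "progressive": [("x", "z"), ("z", "y")],   # chain, e.g. "mother-in-law"
    "converging":  [("x", "z"), ("y", "z")],   # e.g. "kinfolk"
    "diverging":   [("z", "x"), ("z", "y")],   # e.g. "sportsman"
}
```

Every pattern has at most two edges, and in the progressive pattern the first edge's tail is the second edge's head.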

Figure 3: Meta patterns.

3.3 Meta Pattern Classification

Since there are only four meta patterns, recognizing the meta pattern of an explanation sentence can be treated as a classification problem. Since the single-edge pattern corresponds to a single predicate or property, it can be identified through traditional relation linking algorithms. Hence, we only consider how to determine the other three meta patterns in the following discussion.

Many classification models are available, e.g., RNN with attention [31] and TextCNN [17]; the model itself is not the focus of this paper. One important issue is to collect training data, i.e., compound phrases, the corresponding explanation sentences, and the matched subgraph patterns. To the best of our knowledge, no such training dataset exists yet. Since a knowledge graph may contain millions of triples, manually building the training dataset is not a trivial task. Therefore, we propose a data-driven approach to collecting training examples at a low cost.

Training data collection - a data-driven approach. First of all, we need to address the challenge of how to obtain compound phrases. Titles of Wikipedia webpages provide a huge number of phrases, such as "brother", "family man", and "hometown". However, there may be noisy data, as some titles correspond to entities (e.g., Ludwig van Beethoven) or cannot be matched in the knowledge graph at all (e.g., Dominican Order); such phrases are directly discarded. Algorithm 1 presents the details of collecting training examples. The external dictionary API is invoked to provide an explanation sentence s of each phrase. If exactly two relations r1 and r2 are extracted from G to match the simple phrases in s, we check whether they are directly connected in G. In order to avoid ambiguous matches, the corresponding subgraph pattern is added to the training set only when r1 and r2 can form exactly one pattern. The relation linking of simple phrases will be discussed in the next section. The procedure proceeds until the number of identified training examples exceeds a threshold k.

After obtaining the automatically generated training examples, it is easy to refine these results. Actually, the method of constructing training data above can be roughly used to perform combinational relation linking. It is also compared as a baseline in our experiments.

Input: Wikipedia titles, dictionary API, and knowledge graph G
Output: Training examples T

1:  T ← ∅
2:  for each Wikipedia title t do
3:     if |T| ≥ k then
4:        return T
5:     if t matches a property/predicate/type/entity in G then
6:        continue
7:     s ← the explanation sentence of t obtained by invoking the dictionary API
8:     R ← the relations corresponding to simple phrases in s
9:     if R = {r1, r2} then
10:       if r1 and r2 are adjacent in G and form only one pattern then
11:          T ← T ∪ {subgraph pattern consisting of r1 and r2}
12: return T
Algorithm 1 Data-driven Relation Linking
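A runnable sketch of Algorithm 1's main loop follows. All inputs are stand-ins: `explain` (the dictionary API), `link_simple` (simple phrase linking), `kg_adjacent` (adjacency in the knowledge graph), and `kg_vocab` (titles that already match the graph directly) are hypothetical stubs, not real APIs.

```python
def collect_training_examples(titles, explain, link_simple, kg_adjacent, kg_vocab, k):
    """Harvest (phrase, explanation, ordered relations) training examples.

    explain      phrase -> explanation sentence, or None  (dictionary API stub)
    link_simple  sentence -> relations matched by simple phrase linking
    kg_adjacent  set of (r1, r2) pairs directly connected in the KG
    kg_vocab     titles that already match a predicate/type/entity directly
    k            target number of examples
    """
    examples = []
    for t in titles:
        if len(examples) >= k:
            break
        if t in kg_vocab:              # title matches the KG directly: skip it
            continue
        s = explain(t)
        if s is None:
            continue
        rels = link_simple(s)
        if len(rels) != 2:
            continue
        r1, r2 = rels
        fwd = (r1, r2) in kg_adjacent
        bwd = (r2, r1) in kg_adjacent
        if fwd != bwd:                 # unambiguous: exactly one ordering exists
            examples.append((t, s, (r1, r2) if fwd else (r2, r1)))
    return examples
```

With toy data for "mother-in-law", only the spouse-then-mother ordering is adjacent in the graph, so exactly one example is kept.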

Meta pattern classification. The state-of-the-art RNN Attention model [31] is adopted in the experiments. Note that the aim is to infer the meta pattern of a sentence with regard to a specific knowledge graph. Generally, even an identical sentence may yield distinct meta patterns when the underlying knowledge graphs are different. In order to make the explanation sentence fit the model better, we include features that depend on the underlying knowledge graph, i.e., we perform a relation mask over the sentence. For ease of presentation, a phrase that directly matches a single predicate or property is called a simple phrase. Given an explanation sentence, we identify all the simple phrases and replace them with their matching predicates in the knowledge graph. Since the original predicate IRIs may be long and contain noisy notation, we remove the prefix of each IRI and put the special symbol "*" in its place. The special symbol also denotes the beginning of a relation (predicate or property). In other words, the input of the model is a mixed text of words and masked relations.
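The relation mask can be sketched as follows; the whitespace tokenization and prefix handling are simplifications assumed here, not the system's exact preprocessing.

```python
def relation_mask(sentence, simple_links):
    """Replace each simple phrase with its linked relation; the IRI prefix is
    shortened to the marker '*', which also signals where a relation begins."""
    out = []
    for token in sentence.split():
        if token in simple_links:
            local_name = simple_links[token].split(":", 1)[1]  # drop the prefix
            out.append("*" + local_name)
        else:
            out.append(token)
    return " ".join(out)
```

For instance, with the mapping {"male": "foaf:gender", "child": "dbo:child"}, the sentence "a male child" becomes "a *gender *child".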

Example 1.

Let us consider the explanation sentence "a male child". Through relation linking we obtain "a foaf:gender dbo:child" (where foaf is http://xmlns.com/foaf/0.1/ and dbo is http://dbpedia.org/ontology/). Replacing the prefixes with "*" leads to the sentence "a *gender *child". Feeding it to the classification model, we obtain the progressive pattern.

4 Compound Phrase Linking

In this section, we first present the techniques of meta element detection (Section 4.1) and then conduct relation assembly according to the recognized meta patterns (Section 4.2).

4.1 Meta Element Detection

Since the desired subgraph pattern consists of types and relations (predicates or properties), we identify the meta elements including both node labels (i.e., types) and edge labels (i.e., relations) in this subsection.

(1) Type Restriction. The type restriction step recognizes type mentions in the explanation sentence and links them to the knowledge graph G. Each identified type becomes the restriction of a pattern node. Actually, type is also a relation. In this paper, we retrieve all type IRIs from the given knowledge graph and materialize them as a type dictionary. Each candidate phrase mention in the explanation sentence is enumerated and searched for in the type dictionary.
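A minimal sketch of the dictionary lookup, assuming whitespace tokenization and a crude possessive-stripping normalization that the paper does not specify:

```python
def detect_types(sentence, type_dict, max_len=3):
    """Enumerate candidate mentions (up to max_len-word n-grams) of the
    explanation sentence and look each one up in the type dictionary."""
    words = sentence.lower().replace("'s", "").split()   # crude normalization
    hits = []
    for n in range(max_len, 0, -1):                      # prefer longer mentions
        for i in range(len(words) - n + 1):
            mention = " ".join(words[i:i + n])
            if mention in type_dict:
                hits.append((mention, type_dict[mention]))
    return hits
```

On "the mother of a person's spouse" with a type dictionary containing "person", this yields the type restriction dbo:Person.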

(2) Simple Phrase Linking. This step extracts relation mentions from simple relation sentences and maps these relation mentions to the knowledge graph G. These relations correspond to the edges of meta patterns. There are a variety of resources and systems for single relation linking. SIBKB [25] provides searching mechanisms for linking natural language relations to knowledge graphs. BOA [13] can be used to extract natural language representations of predicates independent of the language if provided with a Named Entity Recognition service. ReMatch [22] is an independently reusable tool for matching natural language relations to knowledge graph properties. EARL [9] is a recent approach that treats entity and relation linking as a single joint step; it determines the best semantic connection between all keywords of the question by exploiting the connection density between entity and relation candidates. All of these tools can be used to conduct single relation linking. In this paper, we ground the relation mentions extracted from the phrase explanation sentence to predicates/properties based on SIBKB.

Example 2.

Let us consider the compound phrase "mother-in-law" and its explanation "the mother of a person's spouse". We can extract the type keyword "person" and link it to the type "dbo:Person" in the knowledge graph (DBpedia); it acts as the type restriction of a node in the meta pattern. Conducting single relation linking, we extract the relation mentions "mother" and "spouse" from this sentence and link them to "dbo:mother" and "dbo:spouse", respectively. They correspond to edges in the meta patterns.

4.2 Relation Assembly

Input: Phrase explanation sentence s
Output: Subgraph pattern P

1:  E ← meta elements in s
2:  if s contains a compound phrase p′ then
3:     s′ ← the explanation sentence of p′
4:     P′ ← CompoundPhraseLinking(s′)
5:     Replace p′ with P′ in s
6:  else if E contains exactly one relation then
7:     return the pattern consisting of that relation
8:  else if E contains exactly two relations then
9:     m ← the meta pattern of s
10:    P ← data-driven assembly and meta pattern checking against m
11:    return P
12: return "No match"
Algorithm 2 CompoundPhraseLinking(s)

With the recognized meta elements and meta pattern, we are ready to produce the subgraph pattern. A naive approach is to perform relation assembly following a data-driven paradigm: the subgraph pattern is constructed by retrieving the subgraphs that cover all the meta elements, similar to the procedure in Algorithm 1. However, it may produce ambiguous subgraphs, i.e., distinct subgraphs that cover the meta elements and follow the meta pattern, and the overall linking performance degrades correspondingly.

Meta Pattern Assembly. In order to address the problem above, we propose a novel approach to assembling relations under the guidance of meta patterns and the underlying knowledge graph. Algorithm 2 depicts the process. Unlike the naive approach above, we add meta patterns to restrict the subgraphs, which improves the precision of the data-driven method. Given the phrase explanation sentence s, we infer its meta pattern through the classification model as shown in Section 3. Based on the meta pattern, we assemble the recognized meta elements (including types and relations) into a subgraph one by one in the order they appear in the explanation sentence. If the assembled subgraph pattern matches a subgraph in the knowledge graph, it is delivered as the result; otherwise, we modify the order of the recognized relations and repeat the check.
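The assemble-then-reorder loop can be sketched as follows; `matches_kg` is a caller-supplied stand-in for the subgraph check against the knowledge graph, and the edge-slot encoding of the progressive pattern is an assumption.

```python
from itertools import permutations

def assemble(pattern_edges, relations, matches_kg):
    """Label the meta pattern's edge slots with the detected relations, trying
    the orderings one by one; return the first labeling that matches a subgraph
    of the knowledge graph (as judged by matches_kg), else None."""
    for order in permutations(relations):
        labeled = [(h, r, t) for (h, t), r in zip(pattern_edges, order)]
        if matches_kg(labeled):
            return labeled
    return None

# Progressive pattern for "mother-in-law": in this toy KG check, only the
# spouse-then-mother ordering exists as a path.
progressive = [("x", "z"), ("z", "y")]
kg = lambda edges: [r for _, r, _ in edges] == ["dbo:spouse", "dbo:mother"]
result = assemble(progressive, ["dbo:mother", "dbo:spouse"], kg)
```

The first ordering (mother, then spouse) fails the knowledge graph check, so the loop falls through to the correct spouse-then-mother assembly.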

Example 3.

Let us consider the running example. By applying the pattern classification model, we infer that the pattern of the explanation sentence "the mother of a person's spouse" is the progressive pattern presented in Figure 3. We then assemble the relations "dbo:mother" and "dbo:spouse" derived from the meta element detection step according to the progressive pattern. There are two possible assembled subgraphs: {x dbo:mother z, z dbo:spouse y} and {x dbo:spouse z, z dbo:mother y}. Based on the meta pattern constraint, we can infer that the compound phrase "mother-in-law" is represented as {person dbo:spouse person, person dbo:mother person}.

Nested Pattern Assembly. Some explanation sentences contain not only simple phrases but also compound phrases. A compound phrase within an explanation sentence is called a "nested phrase". As shown in Algorithm 2, a nested compound phrase is parsed recursively.

Example 4.

Let us consider the compound phrase "great-grandparent". Its explanation sentence is "a parent of your grandparent", which contains another compound phrase, "grandparent", so we need to parse "grandparent" first. Based on meta pattern assembly, we infer that the subgraph pattern of "grandparent" is {person dbo:parent person, person dbo:parent person}. Then "grandparent" is treated as a new simple relation "dbo:grandparent", and the modified explanation sentence is classified with the classification model; it follows the progressive pattern as well. Finally, we deliver the subgraph pattern {person dbo:parent person, person dbo:parent person, person dbo:parent person} for the phrase "great-grandparent".
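The nested resolution in Example 4 can be sketched as a recursion over toy inputs; the explanation sentences and the nested-phrase detection are hand-supplied here rather than coming from a dictionary API or classifier, so this only illustrates the recursion, not the full system.

```python
def link_phrase(phrase, simple_rels, explanations, nested):
    """Recursive sketch of nested pattern assembly: a simple phrase maps to one
    relation; a compound phrase is explained, any nested compound phrase in the
    explanation is linked first, and the remaining simple phrases each
    contribute one edge."""
    if phrase in simple_rels:                      # base case: one predicate
        return [simple_rels[phrase]]
    sentence = explanations[phrase]
    edges = []
    for inner in nested.get(phrase, ()):           # resolve nested compounds first
        edges.extend(link_phrase(inner, simple_rels, explanations, nested))
    for word in sentence.split():                  # then the simple phrases
        if word in simple_rels:
            edges.append(simple_rels[word])
    return edges

rels = link_phrase(
    "great-grandparent",
    {"parent": "dbo:parent"},
    {"great-grandparent": "a parent of your grandparent",
     "grandparent": "a parent of your parent"},
    {"great-grandparent": ["grandparent"]},
)
```

For "great-grandparent" this yields a chain of three dbo:parent edges, matching Example 4.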

5 Experimental Study

The proposed approach is systematically studied in this section over real datasets. Section 5.1 presents the experimental settings, followed by the results in Section 5.2.

5.1 Experimental Settings

Datasets. To evaluate the performance of our approach, we collect compound phrases based on Wikipedia webpage titles; the details of collecting the training examples are described in Section 3.3. In the experiments, DBpedia is adopted as the knowledge graph. In total, we collect 600 compound phrases, of which 500 are used to train the model and 100 to test the performance. Beyond that, we also collect 100 simple phrases to evaluate the effect of the proposed external knowledge and meta patterns. All of the collected data will be released once the review process is complete.

Competitors. To evaluate the performance of the proposed approach, we compare it with the following competitors.

  • Keyword Match: It simply matches the compound phrase against all predicates in the knowledge graph. A predicate is delivered once it matches the input phrase.

  • SIBKB [25] provides searching mechanisms for linking natural language relations to knowledge graphs.

  • Similarity Search: It calculates the similarity between the compound phrase and each predicate and then returns the best predicate with the highest similarity.

  • Data-driven linking: Similar to our approach, it is equipped with external knowledge and exploits the data-driven approach to retrieve subgraph patterns. The only difference is that it works without the guidance of meta patterns.

Evaluation metrics. We evaluate the effectiveness (precision, recall, and F1-measure) and efficiency (the response time from receiving a phrase to delivering its matches) of the methods.

5.2 Experimental Results

Method Precision Recall F-score
Without relation mask 0.73 0.78 0.72
With relation mask 0.90 0.88 0.86
Table 1: Effect of relation mask on meta pattern classification
Method Precision Recall F-score
Keyword Match 0.050 0.025 0.033
Similarity Search 0.167 0.083 0.094
SIBKB 0.050 0.050 0.048
Data-driven Linking 0.167 0.808 0.150
Our approach 0.65 0.625 0.633
Table 2: Results of competitors and our approach on compound phrase linking
Method Precision Recall F-score
Without Explanation 0.20 0.175 0.183
With Explanation 0.80 0.775 0.783
Table 3: Evaluation of external knowledge on simple phrase linking

Evaluation of meta pattern classification. As discussed in Section 3.3, we include features that depend on the underlying knowledge graph, i.e., we replace phrase mentions with the corresponding relations. Table 1 shows the effect of this on the performance of classifying meta patterns. The precision of the meta pattern classification reaches 0.90 when equipped with the relation mask, and the recall improves as well. This indicates that the relation mask, which takes advantage of the target knowledge graph, is very effective, as the F-score gains 0.14.

Results of competitors and our approach. Table 2 shows the results of the four competitors and our approach. The keyword match and similarity search methods perform very poorly, with low precision and recall, because they can only link a phrase to a single predicate or property, whereas most compound phrases match subgraph patterns with multiple edges rather than a single relation. Another reason SIBKB performs poorly is that it depends on the PATTY database to find synonyms for relation keywords, and PATTY contains a very limited number of synonyms; performance degrades greatly once it does not contain the compound phrases. Although data-driven linking achieves a relatively high recall, its precision is rather low, which lowers the overall F-score correspondingly. In contrast, our approach powered by meta patterns performs much better than data-driven linking, as it achieves a good balance between precision and recall.

Evaluation of external knowledge on simple phrase linking. In order to study the importance of external knowledge, i.e., obtaining the concrete meanings of the input phrase, we also evaluate its effect on simple phrase linking, which extracts a single relation for an input simple phrase. As shown in Table 3, relation linking enhanced with explanations significantly outperforms the variant that does not consider external knowledge. Hence, exploiting external knowledge is helpful for both simple and compound phrase linking.

Response time of different methods. We also report the time cost of each method as presented in Table 4, where the response time is averaged over 100 testing compound phrases. We can see that keyword match runs the fastest as it only performs exact matching computation. Without the guidance of meta patterns, the data-driven linking faces a larger search space. Thus it consumes more time than the other methods, which can further illustrate the superiority of meta patterns.

Method Time cost (sec)
Keyword Match 0.161
Similarity Search 1.661
Data-driven Linking 3.065
Our approach 0.377
Table 4: Response time of different methods

Error Analysis. In order to improve the compound phrase linking in the future, we analyze the results and categorize the errors into three groups, i.e., errors in meta pattern classification, relation assembly and meta element identification.

Classification errors. As our proposed method highly depends on meta patterns, the final result will be incorrect when the predicted pattern is wrong. For example, consider the compound phrase "countrywoman" in the experiments. Its explanation sentence is "a woman from your own country". The pattern predicted by the classification model is the progressive pattern, based on which we obtain the assembled subgraph pattern {x dbo:country z, z foaf:gender y}. However, the correct meta pattern of the sentence should be the diverging coordinative pattern, and the desired subgraph pattern is {x dbo:country z, x foaf:gender y}.

Relation assembly errors. Relation assembly is not a trivial task, especially for nested compound phrases. For example, the explanation sentence of the compound phrase "co-sister" is "the wife of your husband's brother", where "brother" is itself a compound phrase with regard to DBpedia, explained as "a male sibling". By performing relation assembly, the system delivers the subgraph pattern {x dbo:relative z, z foaf:gender y} for "brother". Then "dbo:brother" is taken as a simple relation in the outer sentence. Finally, the system returns an assembled subgraph pattern chaining dbo:spouse, dbo:relative, foaf:gender, and dbo:spouse. Since dbo:relative and dbo:spouse were assembled with the progressive pattern, rather than foaf:gender and dbo:spouse, the result is incorrect.

Errors caused by meta element identification. The detected meta elements may contain noisy types and relations, which makes relation assembly harder since it is difficult to distinguish them from the correct ones. For example, the explanation of the phrase “stepmother” is “the woman who is married to someone’s father but who is not their real mother”, where the relation dbo:mother is extracted as a match of the mention “mother”. Nevertheless, dbo:mother should not be detected or used for the downstream relation assembly.

6 Related Work

Performing relation linking is closely related to knowledge graph completion [11]. Hence, we next give a brief review of algorithms for relation linking and knowledge graph completion.

6.1 Relation Linking

The previous work on relation linking can be divided into two groups, i.e., independent relation linking [23, 25, 22] and joint relation linking [9, 33, 24].

Independent relation linking. PATTY [23] uses iterative bootstrapping strategies to extract RDF resources from unstructured text. However, PATTY cannot be used directly as a relation linking component in a QA system and needs to be adapted to the application. SIBKB [25] uses PATTY as the underlying knowledge source and proposes a novel approach based on the semantic similarity between mentions and predicates/properties. ReMatch [22] employs dependency parse characteristics with adjustment rules and then carries out a match against knowledge graph properties enhanced with the lexicon WordNet. However, its time efficiency per question is relatively low.
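The core of similarity-driven independent linking (as in the Similarity Search baseline above) can be sketched with plain edit distance [19]. This is a hedged toy sketch, not any of the cited systems: the candidate property list is a small illustrative sample, and real systems combine lexical similarity with semantic resources.

```python
# Sketch of similarity-based relation linking: rank knowledge-graph
# properties by Levenshtein edit distance to the input mention and
# return the closest one. Candidate list is illustrative only.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def link_relation(mention: str, properties: list) -> str:
    """Return the property whose local name is closest to the mention."""
    return min(properties, key=lambda p: levenshtein(mention, p.split(":")[1]))

candidates = ["dbo:spouse", "dbo:country", "foaf:gender", "dbo:relative"]
print(link_relation("countries", candidates))  # dbo:country
```

Such purely lexical matching is fast but brittle: it returns exactly one simple relation per mention, which is why these methods cannot produce the subgraph patterns that compound phrases require.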

Joint relation linking. A number of approaches address entity linking and relation linking jointly [21, 28, 9, 33, 24]. EERL [33] computes relation candidates based on identified entities. Sakor et al. [24] present an approach for jointly linking entities and relations in a short text to the entities and relations of DBpedia; it uses the context of entities to find relations and does not require training data. Miwa and Sasaki [21] propose a history-based structured learning approach that jointly extracts entities and relations in a sentence. EARL [9] determines the best semantic connection between all keywords of the question by exploiting the connection density between entity and relation candidates.

Most of the methods above can only perform simple phrase linking. Hence, they perform poorly on compound phrase linking, which aims to extract a subgraph pattern from the knowledge graph.

6.2 Knowledge Graph Completion

Knowledge graph completion aims to predict missing edges between entities in the knowledge graph and offers an alternative way to deal with compound phrase linking. A variety of knowledge graph completion algorithms have been proposed in recent years. Knowledge graph embedding models, which embed entities and relations into a continuous space, are widely used, including translation-based models [6, 20] and manifold-based models [14, 30, 10]. These methods suffer from poor interpretability of their results. Besides the embedding-based approaches, there are a number of other methods, such as PRA models [18, 27] and GPAR models [12, 11].
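The scoring idea behind the translation-based models [6] mentioned above can be sketched in a few lines. This is a toy illustration of the TransE objective ||h + r - t||, with hand-picked 3-dimensional vectors standing in for trained embeddings; the entity names are assumptions for the example only.

```python
# Sketch of TransE-style scoring: a triple (h, r, t) is plausible when
# the embedding of the head plus the embedding of the relation lands
# near the embedding of the tail, i.e. ||h + r - t|| is small.
import math

def transe_score(h, r, t):
    """L2 distance ||h + r - t||; lower means more plausible."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 3-dimensional embeddings (illustrative, not trained):
emb = {
    "Berlin":      [0.9, 0.1, 0.0],
    "Germany":     [1.0, 0.1, 0.5],
    "France":      [0.0, 0.8, 0.3],
    "dbo:country": [0.1, 0.0, 0.5],
}

# The true tail should score better (lower) than a corrupted one:
good = transe_score(emb["Berlin"], emb["dbo:country"], emb["Germany"])
bad  = transe_score(emb["Berlin"], emb["dbo:country"], emb["France"])
print(good < bad)  # True
```

Note that such models can only score triples over relations seen during training, which is precisely the limitation discussed below for out-of-vocabulary compound phrases.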

Most algorithms designed for knowledge graph completion cannot be directly applied to compound phrase linking since they can only predict relations from a predefined relation dictionary, whereas compound phrases are very likely to correspond to out-of-vocabulary relations.

7 Conclusion

In this paper, we study the problem of finding a subgraph pattern to match a given compound phrase, which has received little attention so far and is not a trivial task. To bridge the gap between unstructured natural language and the structured knowledge graph and to enhance the system's understanding ability, we introduce external knowledge into the linking process. As relation linking highly depends on the underlying knowledge graph, we propose a data-driven relation assembly technique. More importantly, we define several meta patterns that guide the relation assembly. The systematic empirical results show that the proposed approach significantly outperforms the competitors, and confirm the effectiveness of introducing external knowledge and meta patterns.

References

  • [1] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. G. Ives (2007) DBpedia: a nucleus for a web of open data. In ISWC.
  • [2] J. Bao, N. Duan, Z. Yan, M. Zhou, and T. Zhao (2016) Constraint-based question answering with knowledge graph. In COLING, pp. 2503–2514.
  • [3] R. Beaumont, B. Grau, and A. Ligozat (2015) SemGraphQA@qald5: LIMSI participation at qald5@clef. In Working Notes of CLEF.
  • [4] J. Berant, A. Chou, R. Frostig, and P. Liang (2013) Semantic parsing on Freebase from question-answer pairs. In EMNLP.
  • [5] K. D. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor (2008) Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD.
  • [6] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pp. 2787–2795.
  • [7] R. Catherine, K. Mazaitis, M. Eskénazi, and W. W. Cohen (2017) Explainable entity-based recommendations with knowledge graphs. In Proceedings of the Poster Track of the 11th ACM Conference on Recommender Systems.
  • [8] R. Das, M. Zaheer, S. Reddy, and A. McCallum (2017) Question answering on knowledge bases and text using universal schema and memory networks. In ACL, pp. 358–365.
  • [9] M. Dubey, D. Banerjee, D. Chaudhuri, and J. Lehmann (2018) EARL: joint entity and relation linking for question answering over knowledge graphs. In ISWC, pp. 108–126.
  • [10] T. Ebisu and R. Ichise (2018) TorusE: knowledge graph embedding on a Lie group. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • [11] T. Ebisu and R. Ichise (2019) Graph pattern entity ranking model for knowledge graph completion. arXiv preprint arXiv:1904.02856.
  • [12] W. Fan, X. Wang, Y. Wu, and J. Xu (2015) Association rules with graph patterns. Proceedings of the VLDB Endowment 8 (12), pp. 1502–1513.
  • [13] D. Gerber and A. N. Ngomo (2011) Bootstrapping the linked data web. In 1st Workshop on Web Scale Knowledge Extraction @ ISWC.
  • [14] S. Guo, Q. Wang, B. Wang, L. Wang, and L. Guo (2015) Semantically smooth knowledge graph embedding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Vol. 1, pp. 84–94.
  • [15] S. Hu, L. Zou, J. X. Yu, H. Wang, and D. Zhao (2018) Answering natural language questions by subgraph matching over knowledge graphs. IEEE Trans. Knowl. Data Eng. 30 (5).
  • [16] X. Huang, J. Zhang, D. Li, and P. Li (2019) Knowledge graph embedding based question answering. In WSDM, pp. 105–113.
  • [17] Y. Kim (2014) Convolutional neural networks for sentence classification. In EMNLP.
  • [18] N. Lao, T. Mitchell, and W. W. Cohen (2011) Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 529–539.
  • [19] V. I. Levenshtein (1966) Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady 10, pp. 707–710.
  • [20] Y. Lin, Z. Liu, H. Luan, M. Sun, S. Rao, and S. Liu (2015) Modeling relation paths for representation learning of knowledge bases. arXiv preprint arXiv:1506.00379.
  • [21] M. Miwa and Y. Sasaki (2014) Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1858–1869.
  • [22] I. O. Mulang, K. Singh, and F. Orlandi (2017) Matching natural language relations to knowledge graph properties for question answering. In SEMANTICS, pp. 89–96.
  • [23] N. Nakashole, G. Weikum, and F. Suchanek (2012) PATTY: a taxonomy of relational patterns with semantic types. In EMNLP.
  • [24] A. Sakor, I. O. Mulang, K. Singh, S. Shekarpour, M. E. Vidal, J. Lehmann, and S. Auer (2019) Old is gold: linguistic driven approach for entity and relation linking of short text. In NAACL-HLT, pp. 2336–2346.
  • [25] K. Singh, I. O. Mulang, I. Lytra, M. Y. Jaradeh, A. Sakor, M. Vidal, C. Lange, and S. Auer (2017) Capturing knowledge in semantically-typed relational patterns to enhance relation linking. In Proceedings of the Knowledge Capture Conference, pp. 31.
  • [26] F. M. Suchanek, G. Kasneci, and G. Weikum (2007) YAGO: a core of semantic knowledge. In WWW.
  • [27] Q. Wang, J. Liu, Y. Luo, B. Wang, and C. Lin (2016) Knowledge base completion via coupled path ranking. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Vol. 1, pp. 1308–1318.
  • [28] S. Wang, Y. Zhang, W. Che, and T. Liu (2018) Joint extraction of entities and relations based on a novel graph scheme. In IJCAI, pp. 4461–4467.
  • [29] P. Wu, Q. Zhou, Z. Lei, W. Qiu, and X. Li (2018) Template oriented text summarization via knowledge graph. In ICALIP, pp. 79–83.
  • [30] H. Xiao, M. Huang, and X. Zhu (2015) From one point to a manifold: knowledge graph embedding for precise link prediction. arXiv preprint arXiv:1512.04792.
  • [31] Z. Yang, D. Yang, C. Dyer, X. He, A. J. Smola, and E. H. Hovy (2016) Hierarchical attention networks for document classification. In NAACL.
  • [32] W. Yih, M. Chang, X. He, and J. Gao (2015) Semantic parsing via staged query graph generation: question answering with knowledge base. In ACL.
  • [33] J. Z. Pan, M. Zhang, K. Singh, F. Van Harmelen, J. Gu, and Z. Zhang (2019) Entity enabled relation linking.