Correcting Knowledge Base Assertions

01/19/2020 · by Jiaoyan Chen, et al. · City, University of London; University of Oxford; Tencent; Norsk institutt for vannforskning

The usefulness and usability of knowledge bases (KBs) are often limited by quality issues. One common issue is the presence of erroneous assertions, often caused by lexical or semantic confusion. We study the problem of correcting such assertions, and present a general correction framework which combines lexical matching, semantic embedding, soft constraint mining and semantic consistency checking. The framework is evaluated using DBpedia and an enterprise medical KB.


1. Introduction

Knowledge bases (KBs) such as Wikidata (vrandevcic2014wikidata) and DBpedia (auer2007dbpedia) are playing an increasingly important role in applications such as search engines, question answering, common sense reasoning, data integration and machine learning. However, they still suffer from various quality issues, including constraint violations and erroneous assertions (farber2018linked; paulheim2017knowledge), that negatively impact their usefulness and usability. These may be due to the knowledge itself (e.g., the core knowledge source of DBpedia, Wikipedia, is estimated to have an error rate of around 2.8% (weaver2006quantifying)), or may be introduced by the knowledge extraction process.

Existing work on KB quality issues covers not only error detection and assessment, but also quality improvement via completion, canonicalization and so on (paulheim2017knowledge). Regarding error detection, erroneous assertions can be detected by various methods, including consistency checking with defined, mined or external constraints (topper2012dbpedia; paulheim2015serving; knublauch2017shapes), prediction by machine learning or statistical methods (melo2017detection; paulheim2014improving; debattista2016preliminary), and evaluation by query templates (kontokostas2014test); see Section 2.1 for more details. However, the detected erroneous assertions are often simply eliminated (ngomo2014unsupervised; de2013not), and few robust methods have been developed to correct them.

Lertvittayakumjorn et al. (lertvittayakumjorn2017correcting) and Melo et al. (melo2017approach) found that most erroneous assertions are due to confusion or lexical similarity leading to entity misuse; for example, confusion between Manchester_United and Manchester_City, two football clubs based in Manchester, UK, can lead to facts about Manchester_United being incorrectly asserted about Manchester_City. Such errors are common not only in general KBs like DBpedia and Wikidata but also in domain KBs like the medical KB used in our evaluation. Both studies proposed to find an entity to replace either the subject or the object of an erroneous assertion; however, subject replacement used a simple graph metric and keyword matching, which fails to capture the contextual semantics of the assertion, while object replacement relies on Wikipedia disambiguation pages, which may be inaccessible or non-existent, and again fail to capture contextual semantics.

Other work has focused on quality improvement, for example by canonicalizing assertions whose objects are literals that represent entities (i.e., entity mentions); consider, for example, the literal object in the assertion ⟨Yangtze_River, passesArea, “three gorges district”⟩. Replacing this literal with the entity Three_Gorges_Reservoir_Region enriches the semantics of the assertion, which can improve query answering. Such literal assertions are pervasive in wiki-based KBs such as DBpedia (auer2007dbpedia) and Zhishi.me (niu2011zhishi), and in open KBs extracted from text; they may also be introduced when two KBs are aligned or when a KB evolves. According to the statistics in (gunaratna2016gleaning), DBpedia (ca. 2016) included over 105,000 such assertions using the property dbp:location alone. Current methods can predict the type of the entity represented by the literal (gunaratna2016gleaning), which is useful for creating a new entity, and can sometimes identify candidate entities in the KB (chen2019canonicalizing), but they do not propose a general correction method; see Section 2.2 for more details.

In this paper, we propose a method for correcting assertions whose objects are either erroneous entities or literals. To this end, we have developed a general framework that exploits related entity estimation, link prediction and constraint-based validation, as shown in Figure 1. Given a set of target assertions (i.e., assertions that have been identified as erroneous), it uses semantic relatedness to identify candidate entities for substitution, extracts a multi-relational graph from the KB (sub-graph) that can model the context of the target assertions, and learns a link prediction model using both semantic embeddings and observed features. The model predicts the assertion likelihood for each candidate substitution, and filters out those that lead to unlikely assertions. The framework further verifies the candidate substitutions by checking their consistency w.r.t. property range and cardinality constraints mined from the global KB. The framework finally makes a correction decision, returning a corrected assertion or reporting failure if no likely correction can be identified.

Briefly this paper makes the following main contributions:

  • It proposes a general framework that can correct both erroneous entity assertions and literal assertions;

  • It utilizes both semantic embeddings and observed features to capture the local context used for correction prediction, with a sub-graph extracted for higher efficiency;

  • It complements the prediction with consistency checking against “soft” property constraints mined from the global KB;

  • It evaluates the framework with erroneous entity assertions from a medical KB and literal assertions from DBpedia.

2. Related Work

We survey related work in three areas: assertion validation (which includes erroneous assertion detection and link prediction with semantic embeddings and observed features), canonicalization, and assertion correction.

2.1. Assertion Validation

In investigating KB quality, the validity of assertions is clearly an important consideration. One way to identify likely invalid assertions is to check consistency against logical constraints or rules. Explicitly stated KB constraints can be directly used, but these are often weak or even non-existent. Thus, before using the DBpedia ontology to validate assertions, Topper et al. (topper2012dbpedia) enriched it with class disjointness, and property domain and range constraints, all derived via statistical analysis. Similarly, Paulheim and Gangemi (paulheim2015serving) enriched the ontology via alignment with the DOLCE-Zero foundational ontology. Various constraint and rule languages, including the Shapes Constraint Language (SHACL) (knublauch2017shapes), Rule-Based Web Logics (arndt2017using) and SPARQL query templates (kontokostas2014test), have also been proposed so that external knowledge can be encoded and applied in reasoning for assertion validation.

With the development of machine learning, various feature extraction and semantic embedding methods have been proposed to encode the semantics of entities and relations into vectors (wang2017knowledge). The observed features are typically indicators (e.g., paths) extracted for a specific prediction problem. They often work together with other learning and prediction algorithms, including supervised classification (e.g., PaTyBRED (melo2017approach)), autoencoders (e.g., RDF2Vec (ristoski2016rdf2vec)), statistical distribution estimation (e.g., SDValidate (paulheim2014improving)) and so on. PaTyBRED and SDValidate directly detect erroneous assertions, while RDF2Vec utilizes graph paths to learn intermediate entity representations that can be further used to validate assertions via supervised classification.

In contrast to observed features, which often rely on ad-hoc feature engineering, semantic embeddings (vectors) can be learned by minimizing an overall loss with a score function that models an assertion's likelihood; they can then be directly used to estimate the assertion likelihood with the score function. State-of-the-art methods and implementations include DistMult (yang2015embedding), TransE (bordes2013translating), Neural Tensor Network (socher2013reasoning), IterE (zhang2019iteratively), OpenKE (han2018openke) and so on. They can also be combined with algorithms such as outlier detection (debattista2016preliminary) and supervised classification (myklebust2019knowledge) to deal with assertion validation in specific contexts.

On the one hand, the aforementioned methods were mostly developed for KB completion and erroneous assertion detection, and few have been applied in assertion correction, especially the semantic embedding methods. On the other hand, they suffer from various shortcomings that limit their application. Consistency checking depends on domain knowledge of a specific task for constraint and rule definition, while the mined constraints and rules are often weak in modeling local context for disambiguation. Semantic embedding methods are good at modeling contextual semantics in a vector space, but are computationally expensive when learning from large KBs (omran2018scalable) and suffer from low robustness when dealing with real world KBs that are often noisy and sparse (pujara2017sparsity).

2.2. Canonicalization

Recent work on KB canonicalization is relevant to related entity estimation in our setting. Some of this work focuses on the clustering and disambiguation of entity mentions in an open KB extracted from textual data (galarraga2014canonicalizing; wu2018towards; vashishth2018cesi); CESI (vashishth2018cesi), for example, utilizes side information (e.g., WordNet), semantic embedding and clustering to identify equivalent entity mentions. However, these methods cannot be directly applied in our correction framework as they focus on equality while we aim at estimating relatedness, especially for assertions with erroneous objects. The contexts are also different as, unlike entity mentions, literals have no neighbourhood information (e.g., relationships with other entities) that can be utilized.

Chen et al. (chen2019canonicalizing) and Gunaratna et al. (gunaratna2016gleaning) aimed at the canonicalization of literal objects used in assertions with DBpedia object properties (whose objects should be entities). Instead of correcting the literal with an existing entity, they focus on the typing of the entity that the literal represents, which is helpful when a new entity needs to be created for replacement. Although the work in (chen2019canonicalizing) also tried to identify an existing entity to substitute the literal, their approach suffers from a number of limitations: the predicted type is used as a constraint for filtering, which is not a robust and general correction method; the related entity estimation is ad-hoc and DBpedia specific; and the type prediction itself only uses entity and property labels, without any other contextual semantics.

2.3. Assertion Correction

We focus on recent studies concerning the automatic correction of erroneous assertions. Some are KB specific. For example, Dimou et al. (dimou2015assessing) refined the mappings between Wikipedia data and DBpedia knowledge such that some errors can be corrected during DBpedia construction, while Pellissier et al. (pellissier2019learning) mined correction rules from the edit history of Wikidata to create a model that can resolve constraint violations. In contrast, our framework is general and does not assume any additional KB meta information or external data.

Regarding more general approaches, some aim at eliminating constraint violations. For example, Chortis et al. (chortis2015diagnosis; tonon2015fixing) defined and added new properties to avoid violating integrity constraints, while Melo (de2013not) removed sameAs links that lead to such violations. These methods ensure KB consistency, but they can neither correct the knowledge itself nor deal with those wrong assertions that satisfy the constraints. Lertvittayakumjorn et al. (lertvittayakumjorn2017correcting) and Melo et al. (melo2017approach) both aimed at correcting assertions by replacing the objects or subjects with correct entities. The former found the substitute by either keyword matching or a simple graph structure metric (i.e., the number of commonly connected objects of two entities), while the latter first retrieved candidate substitutes from the Wikipedia disambiguation page (which may not exist, especially for KBs that are not based on Wikipedia) and then ranked them by a lexical similarity score. Both methods, however, only use simple graph structure or lexical similarity to identify the substitute, and ignore the linkage incompleteness of a KB. In contrast, our framework utilizes state-of-the-art semantic embedding to model and exploit the local context within a sub-graph to predict assertion likelihood, and at the same time uses global property constraints to validate the substitution.

3. Background

3.1. Knowledge Base

In this study we consider a knowledge base (KB) that follows Semantic Web standards including RDF (Resource Description Framework), RDF Schema, OWL (Web Ontology Language) and the SPARQL Query Language (domingue2011handbook). (There is a revision of the Web Ontology Language called OWL 2; for simplicity we also refer to this revision as OWL.) A KB is assumed to be composed of a TBox (terminology) and an ABox (assertions). The TBox usually defines classes (concepts), a class hierarchy (via rdfs:subClassOf), properties (roles), and property domains and ranges. It may also use a more expressive language such as OWL to express constraints such as class disjointness, property cardinality, property functionality and so on (owl2).

The ABox consists of a set of assertions (facts) describing concrete entities (individuals), each of which is represented by a Uniform Resource Identifier. Each assertion is represented by an RDF triple ⟨s, p, o⟩, where s is an entity, p is a property and o is either an entity or a literal (i.e., a typed or untyped data value such as a string or integer). s, p and o are known as the subject, predicate and object of the triple. An entity can be an instance of one or more classes, which is specified via triples using the rdf:type property. Sometimes we will use class assertion to refer to this latter kind of assertion, and property assertion to refer to assertions where p is not a property from the reserved vocabulary of RDF, RDFS or OWL.

Such a KB can be accessed by SPARQL queries using a query engine that supports the relevant entailment regime (e.g., RDFS or OWL) (glimm2012sparql); such an engine can, e.g., infer ⟨e, rdf:type, C₂⟩ given ⟨e, rdf:type, C₁⟩ and ⟨C₁, rdfs:subClassOf, C₂⟩. In addition, large-scale KBs (aka knowledge graphs) often have a lookup service that enables users to directly access their entities by fuzzy matching; this is usually based on a lexical index that is built with entity labels (phrases defined by rdfs:label) and sometimes entity anchor text (short descriptions). DBpedia builds its lookup service (https://wiki.dbpedia.org/lookup) using the lexical index of Spotlight (mendes2011dbpedia), while entities of Wikidata can be retrieved, for example, via the APIs developed for OpenRefine and OpenTapioca (delpeuch2019opentapioca).
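For illustration, the following is a minimal sketch of programmatic access to such a KB, assuming the public DBpedia SPARQL endpoint and the SPARQLWrapper Python library; the query itself is illustrative rather than taken from our framework.

```python
# Minimal sketch: query an entity's English label from the public DBpedia endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?label WHERE {
      <http://dbpedia.org/resource/Manchester_City_F.C.> rdfs:label ?label .
      FILTER (lang(?label) = "en")
    }
""")
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["label"]["value"])
```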

3.2. Problem Statement

In this study, we focus on correcting ABox property assertions ⟨s, p, o⟩ where o is a literal (literal assertion) or an entity (entity assertion). Note that in the former case correction may require more than simple canonicalization; e.g., the property assertion ⟨Sergio_Agüero, playsFor, “Manchester United”⟩ should be corrected to ⟨Sergio_Agüero, playsFor, Manchester_City⟩.

Literal assertions can be identified by data type inference and regular expressions as in (gunaratna2016gleaning), while erroneous entity assertions can be detected either manually when the KB is applied in downstream applications or automatically by the methods discussed in Section 2.1. It is important to note that if the KB is an OWL ontology, the set of object properties (which connect two entities) and data properties (which connect an entity to a literal) should be disjoint. In practice, however, KBs such as DBpedia often do not respect this constraint.

We assume that the input is a KB K and a set T of literal and/or entity assertions that have been identified as incorrect. For each assertion t = ⟨s, p, o⟩ in T, the proposed correction framework aims at either finding an entity e from K as an object substitute, such that e is semantically related to o and the new triple ⟨s, p, e⟩ is true, or reporting that there is no such entity in K.

4. Methodology

4.1. Framework

As shown in Figure 1, the main components of our assertion correction framework consist of related entity estimation, link prediction, constraint-based validation and correction decision making. Related entity estimation identifies those entities that are relevant to the object of the assertion. Given a target assertion t = ⟨s, p, o⟩, its related entities, ranked by relatedness, are denoted as RE(t) = [e_1, …, e_k]. They are called candidate substitutes of the original object o, and the new assertions obtained when o is replaced are called candidate assertions. We adopt two techniques — lexical matching and word embedding — to measure relatedness and estimate RE(t). Note that the aim of this step is to ensure high recall; precision is subsequently taken care of via link prediction and constraint-based validation over the candidate assertions.

Link prediction estimates the likelihood of each candidate assertion. For each entity e in RE(t), it considers the target assertion t = ⟨s, p, o⟩ and outputs a score that measures the likelihood of ⟨s, p, e⟩. To train such a link prediction model, a sub-graph that contains the context of the correction task (i.e., T) is first extracted, with the related entities, involved properties and their neighbourhoods; positive and negative assertions are then sampled for training. State-of-the-art semantic embeddings (TransE (bordes2013translating) and DistMult (yang2015embedding)), as well as some widely used observed features (path and node), are used to build the link prediction model.

Constraint-based validation checks whether a candidate assertion violates constraints on the cardinality or (hierarchical) range of the property, and outputs a consistency score which measures its degree of consistency against such constraints. Such constraints can be effective in filtering out unlikely assertions, but modern KBs such as DBpedia and Wikidata often include only incomplete or weak constraints, or do not respect the given constraints as no global consistency checking is performed. Therefore, we do not assume that there are any property cardinality or range constraints in the KB TBox (any property range and cardinality constraints that are defined in the TBox, or that come from external knowledge, can be easily and directly injected into the framework), but instead use mined constraints, each of which is associated with a supporting degree (probability).

Correction decision making combines the results of related entity estimation, link prediction and constraint-based validation; it first integrates the assertion likelihood scores and consistency scores, and then filters out those candidate substitutes that have low scores. Finally, it either reports that no suitable correction was found, or recommends the most likely correction.

Figure 1. The Overall Framework for Assertion Correction

4.2. Related Entity Estimation

For each target assertion t = ⟨s, p, o⟩ in T, related entity estimation directly adopts o as the input if o is a literal, or extracts the label of o if o is an entity. It returns a list containing up to k most related entities, i.e., RE(t) = [e_1, …, e_k]. Our framework supports both a lexical matching based approach and a word embedding based approach; this allows us to compare the effectiveness of the two approaches on different KBs (see Section 5.2).

For those KBs with a lexical index, the lexical matching based approach can directly use a lookup service based on the index, which often returns a set of related entities for a given phrase. Direct lookup with the original phrase, however, often misses the correct entity, as the input phrase, whether coming from the erroneous entity or the literal, is frequently noisy and ambiguous. For example, the DBpedia Lookup service returns no entities for the input “three gorges district”, which refers to the entity dbr:Three_Gorges_Reservoir_Region. To improve recall, we retrieve a list of up to k entities by repeating entity lookup using sub-phrases (we refer to this method as Lookup*), starting with the longest sub-phrases and continuing with shorter and shorter sub-phrases until either k entities have been retrieved or all sub-phrases have been used. The list of each lookup is ordered according to the relatedness (lexical similarity) to the original phrase, while the lists are concatenated in the above lookup order. To extract the sub-phrases, we first tokenize the original phrase, remove the stop words and then concatenate the tokens in their original order for sub-phrases of different lengths. For those KBs without an existing lexical index, the lexical matching based approach instead computes the relatedness between the input phrase and an entity's label with a string similarity score, the edit distance (navarro2001guided).
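The following is a minimal sketch of this sub-phrase lookup strategy (Lookup*). The lookup_service callable is a placeholder for a KB lookup API (such as DBpedia Lookup) that returns a relatedness-ordered entity list for a phrase; the stop-word list, the simple whitespace tokenization and the default k = 30 are illustrative assumptions.

```python
from typing import Callable, List

STOP_WORDS = {"the", "of", "a", "an", "in", "on", "and"}  # illustrative subset

def sub_phrases(phrase: str) -> List[str]:
    """Sub-phrases of decreasing length, tokens kept in their original order."""
    tokens = [t for t in phrase.lower().split() if t not in STOP_WORDS]
    phrases = []
    for length in range(len(tokens), 0, -1):              # longest first
        for start in range(0, len(tokens) - length + 1):
            phrases.append(" ".join(tokens[start:start + length]))
    return phrases

def related_entities(phrase: str, lookup_service: Callable[[str], List[str]],
                     k: int = 30) -> List[str]:
    """Repeat lookup over sub-phrases until k distinct entities are retrieved."""
    results: List[str] = []
    for sub in sub_phrases(phrase):
        for entity in lookup_service(sub):                # service order preserved
            if entity not in results:
                results.append(entity)
            if len(results) >= k:
                return results
    return results
```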

The word embedding based approach calculates the similarity between o and each entity in the KB, using vector representations of their labels (literals). It (i) tokenizes the phrase and removes the stop words, (ii) represents each token by a vector using a word embedding model (e.g., Word2Vec (mikolov2013efficient)) trained on a large corpus, where tokens that are out of the model's vocabulary are ignored, (iii) calculates the average of the vectors of all the tokens, which is a widely adopted strategy to embed a phrase, and (iv) computes a distance-based similarity score of the two vectors, e.g., the cosine similarity.
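A minimal sketch of this word embedding based relatedness, assuming a pre-trained Word2Vec model loadable with gensim; the model file name is a placeholder.

```python
import numpy as np
from gensim.models import KeyedVectors

word_vectors = KeyedVectors.load_word2vec_format("word2vec_wiki.bin", binary=True)

def phrase_vector(phrase: str) -> np.ndarray:
    """Average the vectors of in-vocabulary tokens (out-of-vocabulary ignored)."""
    tokens = [t for t in phrase.lower().split() if t in word_vectors]
    if not tokens:
        return np.zeros(word_vectors.vector_size)
    return np.mean([word_vectors[t] for t in tokens], axis=0)

def relatedness(phrase: str, entity_label: str) -> float:
    """Cosine similarity between the phrase vector and the entity label vector."""
    v1, v2 = phrase_vector(phrase), phrase_vector(entity_label)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(np.dot(v1, v2) / denom) if denom > 0 else 0.0
```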

Compared with lexical matching, word embedding considers the semantics of a word, and can assign a high similarity score to two synonyms. In the above lookup example, “district” becomes noise for lexical matching as it is not included in the label of dbr:Three_Gorges_Reservoir_Region, but it can still play an important role in the word embedding based approach due to the short vector distance between “district” and “region”. However, in practice entity misuse is often caused not by semantic confusion but by similarity of spelling and token composition, where the lexical similarity is high but the semantics may be quite different. Moreover, lexical matching with a lexical index makes it easy to utilize multiple items of textual information, such as labels in multiple languages and anchor text, where the different names of an entity are often included.

4.3. Link Prediction

Given the related entities RE(t) of a target assertion t = ⟨s, p, o⟩, link prediction is used to estimate a likelihood score for each candidate assertion ⟨s, p, e⟩, with e ranging over RE(t). For efficiency in dealing with very large KBs, we first extract a multi-relational sub-graph that contains the context of the task, and then train the link prediction model with a sampling method as well as with different observed features and semantic embeddings.

4.3.1. Sub-graph

Given a KB K and a set of target assertions T, the sub-graph corresponding to T is a part of K, denoted as G = (E, P, A), where E denotes entities, P denotes object properties (relations) and A denotes assertions (triples). As shown in Algorithm 1, the sub-graph is calculated in three steps: (i) extract the seeds — the entities and properties involved in the target assertions T, as well as the related entities of each assertion in T; (ii) extract the neighbourhoods — the directly associated assertions of each of the seed properties and entities; (iii) re-calculate the properties and entities involved in these assertions. Note that K ⊨ a means that an assertion a is either directly declared in the KB or can be inferred from it. The retrieval steps can be implemented with SPARQL: Step 2 needs one query per seed property, each of which retrieves the associated assertions of that property, plus one query per seed entity, each of which retrieves the associated assertions with that entity as subject or object.

Input: (i) the whole KB: K, (ii) the set of target assertions: T,
       (iii) the related entities of each target assertion: {RE(t) | t ∈ T}
Result: the sub-graph: G = (E, P, A)
begin
    /* Step 1: extract the seeds */
    E ← {s | ⟨s, p, o⟩ ∈ T}                                  // subject entities
    P ← {p | ⟨s, p, o⟩ ∈ T}                                  // target properties
    E ← E ∪ (∪_{t ∈ T} RE(t))                                // the union of related entities
    foreach ⟨s, p, o⟩ ∈ T do
        if o is an entity then
            E ← E ∪ {o}                                      // object entity
    /* Step 2: extract the neighbourhoods */
    A ← {⟨s', p', o'⟩ | K ⊨ ⟨s', p', o'⟩, p' ∈ P}
    A ← A ∪ {⟨s', p', o'⟩ | K ⊨ ⟨s', p', o'⟩, s' ∈ E or o' ∈ E}
    /* Step 3: re-calculate entities and properties */
    E ← {s' | ⟨s', p', o'⟩ ∈ A} ∪ {o' | ⟨s', p', o'⟩ ∈ A, o' is an entity}
    P ← {p' | ⟨s', p', o'⟩ ∈ A}
    return G = (E, P, A)
end
Algorithm 1. Sub-graph Extraction(K, T, {RE(t) | t ∈ T})
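Below is a minimal in-memory sketch of Algorithm 1, assuming the KB has been materialized (i.e., closed under inference) as a set of (subject, property, object) triples; over a large KB each retrieval step would instead be issued as a SPARQL query, and Step 3 amounts to reading the entities and properties off the returned triples.

```python
from typing import List, Set, Tuple

Triple = Tuple[str, str, str]

def is_entity(term: str) -> bool:
    """Placeholder test: treat IRIs as entities, everything else as literals."""
    return term.startswith("http://") or term.startswith("https://")

def extract_subgraph(kb: Set[Triple], targets: List[Triple],
                     related: List[List[str]]) -> Set[Triple]:
    # Step 1: seeds -- subject entities, target properties, related entities,
    # and the object entities of entity assertions
    seed_entities = {s for s, _, _ in targets}
    seed_properties = {p for _, p, _ in targets}
    for entities in related:
        seed_entities.update(entities)
    seed_entities.update(o for _, _, o in targets if is_entity(o))
    # Step 2: neighbourhoods -- assertions directly involving a seed
    return {(s, p, o) for s, p, o in kb
            if p in seed_properties or s in seed_entities or o in seed_entities}
```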

4.3.2. Sampling

Positive and negative samples (assertions) are extracted from the sub-graph G. The positive samples are composed of two parts: A⁺ = A⁺_s ∪ A⁺_o, where A⁺_s refers to assertions whose subjects and properties are among E_s (i.e., the subject entities involved in T) and P respectively, while A⁺_o refers to assertions whose objects and properties are among E_r (i.e., the related entities involved in T) and P respectively. A⁺_s and A⁺_o are calculated in two steps: (i) extract all the associated assertions of each property in P from A; (ii) group these assertions according to E_s and E_r. Compared with an arbitrary assertion in A, the above samples are more relevant to the candidate assertions to be predicted. This helps alleviate the domain adaptation problem — the data distribution gap between the training assertions and the assertions to be predicted.

The negative samples are composed of two parts as well: A⁻ = A⁻_s ∪ A⁻_o, where A⁻_s is constructed from A⁺_s by replacing the object with a random entity in E, while A⁻_o is constructed from A⁺_o by replacing the subject with a random entity in E. Take A⁺_s as an example: for each of its assertions ⟨s, p, o⟩, an entity e is randomly selected from E for a synthetic assertion ⟨s, p, e⟩ such that K ⊭ ⟨s, p, e⟩, where ⊭ represents that an assertion is neither declared by the KB nor can be inferred, and ⟨s, p, e⟩ is added to A⁻_s. In implementation, we can conclude K ⊭ ⟨s, p, e⟩ if ⟨s, p, e⟩ ∉ A, as A is extracted from the KB with inference. A⁻_o is constructed similarly. Note that the sizes of A⁺ and A⁻ are balanced.
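A minimal sketch of this negative sampling procedure, under the same in-memory triple-set assumption as above; max_tries is an illustrative guard against cases where random corruptions keep hitting true assertions.

```python
import random
from typing import List, Set, Tuple

Triple = Tuple[str, str, str]

def corrupt(positives: List[Triple], entities: List[str], kb: Set[Triple],
            replace_object: bool = True, max_tries: int = 100) -> List[Triple]:
    """Corrupt the object (resp. subject) of each positive assertion with a
    random entity, keeping only corruptions absent from the (inferred) KB."""
    negatives = []
    for s, p, o in positives:
        for _ in range(max_tries):
            e = random.choice(entities)
            candidate = (s, p, e) if replace_object else (e, p, o)
            if candidate not in kb:     # kb is assumed closed under inference
                negatives.append(candidate)
                break
    return negatives
```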

4.3.3. Observed Features

We extract two kinds of observed features — the path feature and the node feature. The former represents potential relations that can connect the subject and object, while the latter represents the likelihood of the subject being the head of the property, and the likelihood of the object being its tail. For the path feature, we limit the path depth to two, to reduce computation time and feature size, both of which are exponential w.r.t. the depth. In fact, it has been shown that paths of depth one are already quite effective: together with the node feature, they outperform state-of-the-art KB embedding methods like DistMult, TransE and TransH on some benchmarks (toutanova2015observed). Meanwhile, the predictive information of a path vanishes as its depth increases.

In the calculation, we first extract paths of depth one: P¹_{s→o} and P¹_{o→s}, where P¹_{s→o} represents properties from s to o (i.e., {p' | K ⊨ ⟨s, p', o⟩}), while P¹_{o→s} represents properties from o to s (i.e., {p' | K ⊨ ⟨o, p', s⟩}). Next we calculate paths of depth two (ordered property pairs) in two directions as well: P²_{s→o} = {(p₁, p₂) | K ⊨ ⟨s, p₁, e⟩ and K ⊨ ⟨e, p₂, o⟩ for some entity e}, and P²_{o→s} analogously. Finally we merge these paths: P = P¹_{s→o} ∪ P¹_{o→s} ∪ P²_{s→o} ∪ P²_{o→s}, and encode them into a multi-hot vector as the path feature, denoted as f_path. Briefly, we collect all the unique paths from the training assertions as a candidate set, where one path corresponds to one slot in the encoding. When an assertion is encoded into a vector, a slot of the vector is set to 1 if the slot's corresponding path is among the assertion's paths, and 0 otherwise.

The node feature includes two binary variables: f_node = (v_s, v_o), where v_s denotes the likelihood of the subject being the head of the property, and v_o denotes the likelihood of the object being its tail. Namely, v_s = 1 if there exists some entity e such that K ⊨ ⟨s, p, e⟩, and 0 otherwise; v_o = 1 if there exists some entity e such that K ⊨ ⟨e, p, o⟩, and 0 otherwise.
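The following sketch illustrates the observed features over an in-memory triple set. Only the forward direction of the depth-two paths is materialized for brevity, and the path vocabulary (the unique paths collected from the training samples) is assumed to be built separately and passed in as vocab.

```python
from typing import Dict, List, Set, Tuple

Triple = Tuple[str, str, str]

def paths(kb: Set[Triple], s: str, o: str) -> Set[str]:
    """Depth-one paths in both directions, plus forward depth-two paths."""
    one_hop = {f"+{p}" for s2, p, o2 in kb if s2 == s and o2 == o} | \
              {f"-{p}" for s2, p, o2 in kb if s2 == o and o2 == s}
    out_of_s = {(p, o2) for s2, p, o2 in kb if s2 == s}   # (property, intermediate)
    into_o = {(p, s2) for s2, p, o2 in kb if o2 == o}     # (property, intermediate)
    two_hop = {f"+{p1}.+{p2}" for p1, e1 in out_of_s
               for p2, e2 in into_o if e1 == e2}
    return one_hop | two_hop

def encode(kb: Set[Triple], s: str, p: str, o: str,
           vocab: Dict[str, int]) -> List[int]:
    """Multi-hot path feature f_path concatenated with the node feature f_node."""
    f_path = [0] * len(vocab)
    for path in paths(kb, s, o):
        if path in vocab:
            f_path[vocab[path]] = 1
    v_s = int(any(s2 == s and p2 == p for s2, p2, _ in kb))  # s occurs as head of p
    v_o = int(any(p2 == p and o2 == o for _, p2, o2 in kb))  # o occurs as tail of p
    return f_path + [v_s, v_o]
```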

Finally we calculate f_path and f_node, and concatenate them, for each sample in A⁺ ∪ A⁻, and train a link prediction model with a basic supervised classifier, the Multilayer Perceptron (MLP):

    y = MLP(f_path ⊕ f_node),        (1)

where ⊕ denotes vector concatenation and y is the predicted likelihood. The trained model is denoted as M_NP (Node+Path).
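Training the Node+Path model can then be sketched with a basic MLP classifier from scikit-learn, reusing the encode helper above; sub_graph, path_vocab, pos_samples and neg_samples are assumed to come from the previous steps, and the hyper-parameters shown are illustrative rather than the tuned values.

```python
from sklearn.neural_network import MLPClassifier

X = [encode(sub_graph, s, p, o, path_vocab)
     for s, p, o in pos_samples + neg_samples]
y = [1] * len(pos_samples) + [0] * len(neg_samples)

model_np = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500).fit(X, y)

def likelihood(s: str, p: str, e: str) -> float:
    """Likelihood score of the candidate assertion (s, p, e)."""
    return model_np.predict_proba([encode(sub_graph, s, p, e, path_vocab)])[0][1]
```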

We also adopt the path-based latent features learned by the state-of-the-art algorithm RDF2Vec (ristoski2016rdf2vec) as a baseline. RDF2Vec first extracts potential paths stretching out from an entity by, e.g., graph walks, and then learns embeddings of the entities through the neural language model Word2Vec. In training, we encode the subject and object of an assertion by their RDF2Vec embeddings, encode its property by a one-hot vector, concatenate the three vectors, and use the same MLP classifier. The trained model is denoted as M_R2V.

4.3.4. Semantic Embeddings

A number of semantic embedding algorithms have been proposed to learn vector representations of KB properties and entities. One common approach is to define a scoring function that models the truth of an assertion, and to learn by minimizing an overall loss. We adopt two state-of-the-art algorithms — TransE (bordes2013translating) and DistMult (yang2015embedding). Both are simple but have been shown to be competitive with, or even outperform, more complex alternatives (yang2015embedding; bordes2013translating; kadlec2017knowledge). For high efficiency, we learn the embeddings from the sub-graph.

TransE tries to learn a vector representation space such that **o** is a nearest neighbour of **s** + **p** if the assertion ⟨s, p, o⟩ holds, and is far away from **s** + **p** otherwise, where + denotes vector addition. To this end, the score function of ⟨s, p, o⟩, denoted as f(s, p, o), is defined as d(**s** + **p**, **o**), where d is a dissimilarity (distance) measure such as the L1 or L2 norm, while **s**, **p** and **o** are the embeddings of s, p and o respectively. The embeddings have the same configured dimension, and are initialized by one-hot encoding. In learning, a batched stochastic gradient descent algorithm is used to minimize the following margin-based ranking loss:

    L = Σ_{⟨s,p,o⟩ ∈ A⁺} Σ_{⟨s′,p,o′⟩ ∈ A⁻} [γ + d(**s** + **p**, **o**) − d(**s′** + **p**, **o′**)]₊ ,        (2)

where γ is a margin hyper-parameter, [·]₊ denotes taking the positive part, and ⟨s′, p, o′⟩ represents a negative assertion of ⟨s, p, o⟩, generated by randomly replacing the subject or object with an entity in E.

DistMult is a special form of the bilinear model in which the non-diagonal entries of the relation matrices are assumed to be zero. The score function of an assertion ⟨s, p, o⟩ is defined as f(s, p, o) = Σᵢ (**s** ∘ **p** ∘ **o**)ᵢ, where ∘ denotes pairwise (element-wise) multiplication. As with TransE, the embeddings are initialized by one-hot encoding, with a configured dimension. A margin-based ranking loss similar to (2) is used for training with batched stochastic gradient descent.

In prediction, the likelihood score of an assertion is calculated with the corresponding scoring function and the embeddings of its subject, property and object. We denote the link prediction models based on TransE and DistMult as M_TransE and M_DistMult respectively.
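For concreteness, the two score functions can be sketched in a few lines of numpy; in our experiments the embeddings themselves are trained with a toolkit (OpenKE) rather than by hand.

```python
import numpy as np

def transe_score(s: np.ndarray, p: np.ndarray, o: np.ndarray) -> float:
    """Negative L2 distance: higher means the assertion is more likely."""
    return -float(np.linalg.norm(s + p - o, ord=2))

def distmult_score(s: np.ndarray, p: np.ndarray, o: np.ndarray) -> float:
    """Tri-linear product with a diagonal relation matrix."""
    return float(np.sum(s * p * o))
```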

4.4. Constraint-based Validation

We first mine two kinds of soft constraints — property cardinality and hierarchical property range — from the KB, and then use a consistency checking algorithm to validate the candidate assertions against them.

4.4.1. Property Cardinality

Given a property

, its soft cardinality is represented by a probability distribution

, where is an integer that denotes the cardinality. It is calculated as follows: (i) get all the property assertions whose property is , denoted as , and all the involved subjects, denoted as , (ii) count the number of the object entities associated with each subject in and : , (iii) find out the maximum object number: , and (iv) calculate the property cardinality distribution as:

(3)

where denotes the size of a set. Specially if is empty. is short for , denoting the probability that the cardinality is larger than . In implementation, can be accessed by one time SPARQL query, while the remaining computation has linear time complexity w.r.t. .

The probability of cardinality k is thus the ratio of subjects that are associated with k different entity objects. For example, consider a property hasParent that is associated with ten different subjects (persons) in the KB: if one of them has one object (parent) and the remaining nine have two objects (parents), then the cardinality distribution is D_hasParent(card = 1) = 0.1 and D_hasParent(card = 2) = 0.9. Note that although such constraints follow the Closed World Assumption and the Unique Name Assumption, they behave well in our method. On the one hand, the probabilities are estimated to represent the degree to which a constraint is supported by the ABox. On the other hand, they are used in an approximate model to validate candidate assertions, rather than as new and totally true knowledge for KB TBox extension.
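A minimal sketch of mining the soft cardinality from an in-memory triple set; since the KB is a set of triples, counting assertions per subject directly counts distinct objects.

```python
from collections import Counter
from typing import Dict, Set, Tuple

Triple = Tuple[str, str, str]

def soft_cardinality(kb: Set[Triple], prop: str) -> Dict[int, float]:
    """D_p(card = k): fraction of p's subjects that have exactly k objects."""
    objects_per_subject = Counter(s for s, p, _ in kb if p == prop)
    if not objects_per_subject:
        return {}                         # S_p is empty
    counts = Counter(objects_per_subject.values())
    n = len(objects_per_subject)
    return {k: c / n for k, c in counts.items()}
```

On the hasParent example above, soft_cardinality(kb, "hasParent") would return {1: 0.1, 2: 0.9}.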

4.4.2. Hierarchical Property Range

Given a property p, its range constraint consists of (i) the specific range, which includes the most specific classes of its associated objects, denoted as R_s(p), and (ii) the general range, which includes the ancestors of these most specific classes, denoted as R_g(p), with top classes such as owl:Thing excluded. A most specific class of an entity is one of the most fine-grained classes that the entity is an instance of, according to the class assertions in the KB. Note that there can be multiple such classes, as the entity may be asserted to be an instance of multiple classes between which no sub-class relationship holds. General classes of an entity are those that subsume one or more of its specific classes, as specified in the KB via rdfs:subClassOf assertions.

Each range class c in R_s(p) (R_g(p) resp.) has a probability that represents the degree to which it is supported by the KB, denoted as d_s(c) (d_g(c) resp.). R_s(p), R_g(p) and the supporting degrees are calculated in the following steps: (i) get all the object entities that are associated with p, denoted as O_p; (ii) infer the specific and general classes of each entity e in O_p, denoted as C_s(e) and C_g(e) respectively, and at the same time collect ∪_{e ∈ O_p} C_s(e) as R_s(p) and ∪_{e ∈ O_p} C_g(e) as R_g(p); (iii) compute the supporting degrees:

    d_s(c) = |{e ∈ O_p | c ∈ C_s(e)}| / |O_p|,   d_g(c) = |{e ∈ O_p | c ∈ C_g(e)}| / |O_p|.        (4)

The degree of each range class is thus the ratio of the objects that are instances of the class, as either directly declared in the ABox or inferred via rdfs:subClassOf. The implementation needs one SPARQL query to get O_p, and |O_p| SPARQL queries to get the specific and ancestor classes; the remaining computation has linear time complexity w.r.t. |O_p|. Like the property cardinality, the property range is also used to approximate the likelihood of candidate assertions, via the consistency checking algorithm introduced below.
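The range degrees can be mined analogously to the cardinality; in the sketch below, specific_classes is a placeholder for class lookup (with inference), and the general range is obtained by passing a function that returns ancestor classes instead.

```python
from collections import Counter
from typing import Callable, Dict, Set, Tuple

Triple = Tuple[str, str, str]

def range_degrees(kb: Set[Triple], prop: str,
                  specific_classes: Callable[[str], Set[str]]) -> Dict[str, float]:
    """d(c): fraction of prop's objects that are instances of class c."""
    objects = {o for s, p, o in kb if p == prop}
    if not objects:
        return {}
    counts: Counter = Counter()
    for obj in objects:
        counts.update(specific_classes(obj))
    return {c: n / len(objects) for c, n in counts.items()}
```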

4.4.3. Consistency Checking

As shown in Algorithm 2, constraint checking acts as a model that estimates the consistency of a candidate assertion against the soft constraints of property cardinality and hierarchical property range. Given a candidate assertion ⟨s, p, e⟩, the algorithm first checks the property cardinality, with a parameter named the maximum cardinality exceeding rate σ. It counts the number n of object entities that are associated with s and p in the KB, assuming that the correction has been made (i.e., ⟨s, p, e⟩ has been added to the KB); note that n ≥ 1. It then calculates the exceeding rate r of n w.r.t. the mined maximum cardinality k_max. The case k_max = 0 indicates that p is highly likely to be used as a data property in the KB. This is common when correcting literal assertions: one example is the property hasName, whose objects are phrases of entity mentions and should not be replaced by entities. In this case, it is more reasonable to report that the object substitute does not exist, and thus the algorithm sets the cardinality score SC_card to 0.

Another condition for setting SC_card to 0 is r > σ. Specially, when σ is set to 0, r > 0 (i.e., n ≥ 2 and k_max = 1) means that p is used as a functional object property in the KB but the correction violates this constraint. When σ is positive, n is allowed to exceed k_max by a small degree; for example, with σ = 0.5, n = 3 is allowed for k_max = 2 (i.e., r = 0.5). Otherwise, the algorithm calculates the property cardinality score as the probability of p being a functional property (D_p(card = 1)) when n = 1, or as the probability of p being a non-functional property (D_p(card > 1)) when n > 1. Specially, we punish the score when n exceeds k_max (i.e., r > 0) by multiplying it with a degrading factor: the higher the exceeding rate, the more it degrades.

The algorithm then calculates the property range score SC_range by combining the specific range score sc_s and the general range score sc_g with their importance weights w_s and w_g. Usually we make the specific range more important by setting w_s larger than w_g. The two scores are computed such that an object gets a higher score if more of its classes are among the mined range classes, and if those classes have higher range degrees. For example, for the property bornIn, a candidate object whose classes include strongly supported range classes such as City or Country receives a higher range score than one whose classes fall outside the mined range, and the candidates are ordered by their consistency degree against the property range accordingly.

The algorithm finally returns the property cardinality score SC_card and the property range score SC_range. The former model is denoted as M_card while the latter is denoted as M_range. Based on some empirical analysis, we multiply or average the two scores as the final model of consistency checking, denoted as M_card&range.

Input: (i) a candidate assertion: ⟨s, p, e⟩, (ii) the property cardinality constraint:
       D_p with maximum cardinality k_max, (iii) the maximum cardinality exceeding rate: σ,
       (iv) the hierarchical property range constraint: R_s(p) and R_g(p) with degrees d_s and d_g,
       (v) the weights of the specific and general range: w_s, w_g
Result: SC_card: consistency score w.r.t. the property cardinality;
        SC_range: consistency score w.r.t. the property range
begin
    n ← |{e′ | K ⊨ ⟨s, p, e′⟩, e′ an entity}| + 1       // object entities, assuming the correction is made
    if k_max = 0 then
        SC_card ← 0                                     // p is used as a data property in the KB
    else
        r ← (n − k_max) / k_max                         // calculate the exceeding rate
        if r > σ then
            SC_card ← 0                                 // the cardinality exceeds the maximum by too much
        else if n = 1 then
            SC_card ← D_p(card = 1)                     // probability of p being functional
        else
            SC_card ← D_p(card > 1)                     // probability of p being non-functional
            if r > 0 then
                SC_card ← SC_card · (1 − r)             // degrade the score when k_max is exceeded
    C_s, C_g ← the specific and general classes of e    // get the object's classes
    /* calculate the constraint scores of the specific and general ranges */
    sc_s ← Σ_{c ∈ C_s ∩ R_s(p)} d_s(c) / |C_s|;  sc_g ← Σ_{c ∈ C_g ∩ R_g(p)} d_g(c) / |C_g|
    SC_range ← w_s · sc_s + w_g · sc_g                  // the overall range constraint score
    return SC_card, SC_range
end
Algorithm 2. Consistency Checking(⟨s, p, e⟩, D_p, σ, R_s, R_g, w_s, w_g)

4.5. Correction Decision Making

Given a target assertion t in T and its top-k related entities RE(t), for each entity e in RE(t), the correction framework (i) calculates the assertion likelihood score y_lp with a link prediction model (M_NP, M_TransE or M_DistMult), and the consistency score y_cv with M_card, M_range or M_card&range; (ii) separately normalizes y_lp and y_cv into [0, 1] according to all the predictions by the corresponding model for RE(t); (iii) ensembles the two scores by simple averaging: y = (y_lp + y_cv)/2; and (iv) filters out from RE(t) those candidate substitutes whose score y is below a threshold τ. Note that e is always kept if t is a literal assertion and its literal is exactly equal to the label of e. The related entities remaining after filtering keep their original order in RE(t), and are denoted as RE′(t). τ is a parameter in [0, 1] that needs to be adjusted with a development data set. The framework eventually returns none, meaning that there is no entity in the KB that can replace the object of t, if RE′(t) is empty, and the top-1 entity in RE′(t) as the object substitute otherwise. The ensemble of the link prediction score and the constraint-based validation score is not a must: either of them can make a positive impact independently, while their ensemble achieves higher performance in most cases, as evaluated in Section 5.4.
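A minimal sketch of this decision procedure, with min-max normalization and simple averaging; the exception that always keeps an exact label match, and the distinction between literal and entity assertions, are omitted for brevity.

```python
from typing import Callable, List, Optional

def normalize(scores: List[float]) -> List[float]:
    """Min-max normalize into [0, 1]; constant score lists map to 0.5."""
    lo, hi = min(scores), max(scores)
    return [0.5] * len(scores) if hi == lo else [(x - lo) / (hi - lo) for x in scores]

def decide(candidates: List[str], lp_score: Callable[[str], float],
           cv_score: Callable[[str], float], tau: float) -> Optional[str]:
    """Return the top surviving candidate substitute, or None."""
    if not candidates:
        return None
    lp = normalize([lp_score(e) for e in candidates])
    cv = normalize([cv_score(e) for e in candidates])
    combined = [(l + c) / 2 for l, c in zip(lp, cv)]
    surviving = [e for e, y in zip(candidates, combined) if y >= tau]
    return surviving[0] if surviving else None   # candidates keep their order
```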

5. Evaluation

5.1. Experiment Settings

5.1.1. Data

In our experiments, we correct assertions in DBpedia (auer2007dbpedia) and in an enterprise medical KB whose TBox is defined by clinical experts and whose ABox is extracted from medical articles (text) by open information extraction tools (cf. (niklaus2018survey) for more details). DBpedia is accessed via its official Lookup service, its SPARQL endpoint (http://dbpedia.org/sparql) and its entity label dump (for related entity estimation with Word2Vec). The medical KB contains knowledge about diseases, medicines, treatments, symptoms, foods and so on, with a large number of entities, properties and classes, and on the order of a million property assertions. The data are representative of two common situations: the errors of DBpedia are mostly inherited from the source, while the errors of the medical KB are mostly introduced by extraction.

Regarding DBpedia, we reuse the real-world literal set proposed by (chen2019canonicalizing; gunaratna2016gleaning). As our task is not typing the literal but substituting it with an entity, literals containing multiple entity mentions are removed, while properties with insufficient literal objects are complemented with more literals from DBpedia. We annotate each assertion with a ground truth (GT), which is either a correct replacement entity from DBpedia (an Entity GT) or none (an Empty GT). Ground truths are carefully checked using DBpedia, Wikipedia and multiple external resources. Regarding the medical KB, we use a set of assertions with erroneous entity objects that were discovered and collected during the deployment of the KB in enterprise products; the GT annotations have been added with the help of clinical experts. For convenience, we call the above two target assertion sets DBP-Lit and MED-Ent respectively (the DBP-Lit data and the experiment code are available at https://github.com/ChenJiaoyan/KG_Curation). More details are shown in Table 1.

Assertions (with Entity GT) # Properties # Subjects #
DBP-Lit ()
MED-Ent ()
Table 1. Some statistics of DBP-Lit and MED-Ent.

5.1.2. Settings

In the evaluation, we first analyze related entity estimation (Section 5.2) and link prediction (Section 5.3) independently. For related entity estimation, we report the recall of Entity GTs for different methods with varying top-k values, based on which a suitable method and k value are selected for the framework. For link prediction, we compare the performance of different semantic embeddings and observed features, using those target assertions whose Entity GTs are recalled in related entity estimation. The related entities of a target assertion are first ranked according to the predicted score, and then standard metrics including Hits@1, Hits@5 and MRR (Mean Reciprocal Rank; https://en.wikipedia.org/wiki/Mean_reciprocal_rank) are calculated.

Next we evaluate the overall results of the assertion correction framework (Section 5.4), where we compare against the baselines and analyze the impact of link prediction and constraint-based validation. Three metrics are adopted: (i) Correction Rate, the ratio of target assertions that are corrected with the right substitutes, among all target assertions with Entity GTs; (ii) Empty Rate, the ratio of target assertions that are corrected with none, among all target assertions with Empty GTs; (iii) Accuracy, the ratio of truly corrected target assertions (by either substitutes or none) among all target assertions. Note that accuracy is an overall metric considering both the correction rate and the empty rate; either a high (low resp.) correction rate or empty rate can lead to high (low resp.) accuracy. With the overall results, we finally analyze constraint-based validation in more detail.
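The three metrics can be sketched as follows, assuming predictions maps each target assertion to its predicted substitute (or None) and gts maps it to its Entity GT (or None for an Empty GT); both sets are assumed non-empty.

```python
from typing import Dict, Optional

def overall_metrics(predictions: Dict[str, Optional[str]],
                    gts: Dict[str, Optional[str]]) -> Dict[str, float]:
    entity = [t for t, gt in gts.items() if gt is not None]   # Entity GTs
    empty = [t for t, gt in gts.items() if gt is None]        # Empty GTs
    return {
        "correction_rate": sum(predictions[t] == gts[t] for t in entity) / len(entity),
        "empty_rate": sum(predictions[t] is None for t in empty) / len(empty),
        "accuracy": sum(predictions[t] == gts[t] for t in gts) / len(gts),
    }
```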

The reported results are based on the following settings (unless otherwise specified). In related entity estimation, Word2Vec (mikolov2013efficient) trained on the Wikipedia article dump of June 2018 is used for word embedding. In link prediction, the margin hyper-parameter γ is linearly increased w.r.t. the training step, the embedding size of both entities and properties is fixed, and the other training hyper-parameters, such as the number of epochs and the MLP hidden layer size, are set such that the highest MRR is achieved on an evaluation sample set. Regarding the baseline RDF2Vec, pre-trained versions of DBpedia entities with different settings by Mannheim University (https://bit.ly/2M4TQOg) are tested, and the results with the best MRR are reported. In constraint-based validation, σ, w_s and w_g are set according to the insight behind the algorithm (with w_s > w_g); other reasonable settings we explored achieve similar results. The embeddings are trained on a GeForce GTX 1080 Ti with OpenKE (han2018openke), while the remaining computation is done with an Intel(R) Xeon(R) CPU E5-2670 @ 2.60GHz and 32G RAM.

5.2. Related Entity Estimation

We compare the different methods and settings used in related entity estimation, with the results presented in Figure 2, where the recall of Entity GTs within the top-k related entities is shown. First, we find that the lexical matching based methods (Lookup, Lookup* and Edit Distance) have much higher recall than Word2Vec, on both DBP-Lit and MED-Ent. The reason for DBP-Lit may lie in the Lookup service provided by DBpedia, which takes not only the entity label but also the anchor text into consideration. The latter provides more semantics, some of which, such as alternative names and background description, is very helpful for recalling the right entity. The reason for MED-Ent, according to some empirical analysis, is that the erroneous objects are often caused by lexical confusion, such as misspelling and misuse of an entity with similar tokens. Second, our Lookup solution with sub-phrases, i.e., Lookup*, outperforms the original Lookup as expected; when both curves are stable, Lookup* has a clearly higher recall.

The target of related entity estimation in our framework is a high recall with a k value that is not too large (so as to avoid additional noise and limit the size of the sub-graph for efficiency). In a real application, the method and the k value can be set by analyzing the recall. According to the absolute value and the trend of the recall curves in Figure 2, our framework uses Lookup* for DBP-Lit and Edit Distance for MED-Ent, with k set to a value at which the corresponding curve has become stable.

Figure 2. The recall of Entity GTs by the top-k related entities

5.3. Link Prediction

5.3.1. Impact of Models

The results of the different link prediction methods are shown in Table 2, where the sub-graph is used for training. The baseline Random ranks the related entities randomly, while AttBiRNN refers to the attentive bidirectional Recurrent Neural Network that utilizes the labels of the subject, property and object; AttBiRNN was used in (chen2019canonicalizing) for literal object typing, with good performance. First of all, the results verify that either latent semantic embeddings (TransE and DistMult) or observed features with a Multilayer Perceptron are effective for both DBP-Lit and MED-Ent: MRR, Hits@1 and Hits@5 are all dramatically improved in comparison with Random and AttBiRNN.

We also find that concatenating the node feature and the path feature (Node+Path) achieves higher performance than either feature alone, and also outperforms the baseline RDF2Vec, which is based on graph walks; for DBP-Lit, Node+Path is ahead of RDF2Vec on MRR, Hits@1 and Hits@5.

Meanwhile, Node+Path performs better than TransE and DistMult for DBP-Lit, while for MED-Ent, TransE and DistMult outperform Node+Path; considering the MRR metric, for example, Node+Path is ahead of DistMult on DBP-Lit, but DistMult is ahead of Node+Path on MED-Ent. One potential reason is the difference in the number of properties and the sparsity of the two sub-graphs: DBP-Lit involves many more properties, in both its target assertions and its sub-graph, than MED-Ent. The small number of properties for MED-Ent leads to a quite poor path feature, which is verified by its poor independent performance (e.g., a very low MRR for Path alone). Moreover, the average number of connected entities per property (i.e., the density) is much higher in the sub-graph of MED-Ent than in that of DBP-Lit, and a larger ratio of properties to entities also leads to richer path features. According to these results, we use Node+Path for DBP-Lit and DistMult for MED-Ent in our correction framework.

Methods DBP-Lit MED-Ent
MRR Hits@1 Hits@5 MRR Hits@1 Hits@5
Random
AttBiRNN
TransE
DistMult
RDF2Vec
Node
Path
Node+Path
Table 2. Link prediction results based on the sub-graph.

5.3.2. Impact of The Sub-graph

We further analyze the impact of using the sub-graph to train the link prediction model. The results of those methods that can be run over the whole KB within a limited time are shown in Table 3, where Node+Path (DBP-Lit) uses features extracted from the whole KB but samples from the sub-graph. On the one hand, in comparison with Node+Path trained purely with the sub-graph, Node+Path with global features actually performs worse. As all the directly connected properties and entities of each subject entity, related entity and target property are included in the sub-graph, using the sub-graph makes no difference to the node features and the path features of depth one. Thus the above result is mainly due to the fact that path features of depth two make only a limited contribution in this link prediction context. This is reasonable, as they are weak, duplicated or even noisy in comparison with the node features and the path features of depth one. On the other hand, learning the semantic embeddings with the sub-graph has a positive impact on TransE and a negative impact on DistMult for MED-Ent; however, the impact in both cases is quite limited. Considering that the sub-graph contains only a small fraction of the entities and assertions of the whole medical KB, which greatly reduces the training time of the DistMult embeddings, this small performance drop is acceptable.

Cases MRR Hits@1 Hits@5
TransE (MED-Ent) (-) (-) (-)
DistMult (MED-Ent) (+) (+) (+)
Node+Path (DBP-Lit) (-) (-) (-)
Table 3. Link prediction results based on the whole KB, and their gaps relative to those based on the sub-graph

5.4. Overall Results

Figure 3 presents the correction rate, empty rate and accuracy of our assertion correction framework as the filtering threshold τ varies. Note that lexical matching without any filtering is similar to the existing method discussed in the related work (lertvittayakumjorn2017correcting). On the one hand, we find that filtering with either link prediction (LP) or constraint-based validation (CV) can improve the correction rate when τ is set to a suitable range. This is because those candidate substitutes that are lexically similar to the erroneous object but lead to unlikely assertions are filtered out, while those that are not so lexically similar but lead to true assertions are ranked higher. As the empty rate always increases after filtering (e.g., it is clearly improved by Lookup* + LP + CV for DBP-Lit), the accuracy for both DBP-Lit and MED-Ent is improved over the whole range of τ. On the other hand, we find that averaging the scores from link prediction and constraint-based validation is effective: it leads to both a higher correction rate and higher accuracy than either of them alone for suitable ranges of τ, on both DBP-Lit and MED-Ent.

Figure 3. Overall results of the correction framework for DBP-Lit [above] and MED-Ent [below]. + LP and + CV represent filtering with link prediction and constraint-based validation respectively, with the filtering threshold τ ranging from 0 to 1.

Table 4 presents the optimum correction rate and accuracy for several settings. Note that these are achieved using a suitable threshold; in real applications this can be determined using an evaluation data set. Based on these results, we make the following observations. First, the optimum results are consistent with the above conclusions regarding the positive impact of link prediction, constraint-based validation and their ensemble. For example, the optimum accuracy of DBP-Lit is improved by constraint-based validation in comparison with the original related entities from lexical matching; the correction rate of MED-Ent provides another example, where REE + LP + CV is higher than both REE + LP and REE + CV.

Second, lexical matching using either Lookup (for DBP-Lit) or Edit Distance (for MED-Ent) has a much higher correction rate and accuracy than Word2Vec, while our Lookup with sub-phrases (Lookup*) has an even higher correction rate than the original DBpedia Lookup. These overall results verify the recall analysis of related entity estimation in Section 5.2. Meanwhile, we find that the overall results for observed features (Node+Path) and latent semantic embeddings (TransE and DistMult) are also consistent with the link prediction analysis in Section 5.3: Node+Path has a better filtering effect than TransE and DistMult for DBP-Lit, but a worse filtering effect for MED-Ent.

Methods DBP-Lit MED-Ent
C-Rate Acc C-Rate Acc
Lexical Matching
Word2Vec
REE + LP (Node+Path)
REE + LP (DistMult)
REE + CV (cardinality)
REE + CV (range)
REE + CV (cardinality & range)
REE + LP + CV
Table 4. Optimum correction rate (C-Rate) and accuracy (Acc). For Lexical Matching, DBP-Lit uses the original DBpedia Lookup while MED-Ent uses Edit Distance. REE denotes Related Entity Estimation: DBP-Lit uses Lookup* while MED-Ent uses Edit Distance.

5.5. Constraint-based Validation

Besides the positive impact on the overall results mentioned above, we make several more detailed observations about constraint-based validation from Table 4. On the one hand, the property range constraint plays a more important role than the property cardinality constraint, while their combination is more robust than either of them alone, as expected. Considering the target assertions of MED-Ent, for example, filtering by the range constraint leads to higher accuracy than filtering by the cardinality constraint, while filtering by their combination achieves higher accuracy and an equal correction rate in comparison with the range constraint alone.

On the other hand, we find that constraint-based validation performs well for DBP-Lit, with higher accuracy and an equal correction rate in comparison with link prediction, but performs much worse for MED-Ent. This is mainly due to the gap between the semantics of the two target assertion sets and their corresponding KBs: (i) the mined property ranges for DBP-Lit include rich hierarchical classes, while those of the medical KB involve only a small number of classes with no hierarchy; (ii) a large share of the target properties in DBP-Lit are purely functional (i.e., D_p(card = 1) = 1), which plays a key role in the consistency checking algorithm, while none of the target properties of MED-Ent has such pure functionality. The second characteristic is also a potential reason why filtering by constraint-based validation with property cardinality alone achieves only a very limited improvement over Edit Distance for MED-Ent, as shown in Table 4.

We additionally present some examples of the mined soft property constraints in Table 5. Most of them are consistent with our common sense understanding of the properties, although some noise is evident (e.g., the range classes Person and Agent for dbp:homeTown), most likely caused by erroneous property and class assertions.

Property Cardinality Specific Range General Range
dbp:homeTown Location: , City: , Country: , Person: , … PopulatedPlace: , Place: , Settlement: , Agent: , …
dbp:finalteam BaseballTeam: , SportsTeam: , SoccerClub: , … Agent: , Organization: , SportsTeam: , …
Table 5. Soft constraints of two property examples

6. Discussion and Outlook

In this paper we present a study of assertion correction, an important problem for KB curation, but one that has rarely been studied. We have proposed a general correction framework, which does not rely on any KB meta data or external information, and which exploits both deep learning and consistency reasoning to correct erroneous objects and informally annotated literals (entity mentions). The framework and the adopted techniques have been evaluated by correcting assertions in two different KBs: DBpedia, with cross-domain knowledge, and an enterprise KB from the medical domain. We discuss below several further observations from the study, as well as possible directions for future work.

Entity relatedness. Our method follows the principle of correcting the object with a related entity, rather than with an arbitrary entity that leads to a correct assertion. Relatedness can be due to either lexical or semantic similarity. The recall of related entity estimation for DBP-Lit and MED-Ent is promising, but still leaves space for further improvement. One extension for higher recall, but with limited noise and sub-graph size, is incorporating external resources like Wikipedia disambiguation pages or domain knowledge about the errors.

KB variation. Although both constraint-based validation and link prediction improve overall performance, their impact varies from DBpedia to the medical KB. The effectiveness of constraint-based validation depends on the richness of the KB schema, such as property functionality, the complexity of property ranges, etc. The more complex the schema is, the better performance constraint-based validation achieves. The impact of link prediction is more complicated: the path and node features perform better on DBpedia, which has many more properties than the medical KB, while the semantic embeddings by DistMult and TransE are more suitable for the medical KB, which has fewer properties but higher density. Integrating link prediction and constraint-based validation, even with simple score averaging, can improve performance, but further study is needed for a better integration method that is adapted to the structure of the given KB.

Property constraints. On the one hand, the evaluation indicates that the mined property constraints are effective for assertion validation and can be independently used in other contexts like online KB editing. On the other hand, unlike the link prediction model, the constraints as well as the consistency checking algorithm are interpretable. One benefit is that explicitly defined or external TBox constraints can easily be injected into our framework by overwriting or revising the mined constraints. For example, the mined specific range class Person in Table 5, which is inappropriate for the property dbp:homeTown, can be directly removed.

Acknowledgements

The work is supported by the AIDA project, The Alan Turing Institute under the EPSRC grant EP/N510129/1, the SIRIUS Centre for Scalable Data Access (Research Council of Norway, project 237889), the Royal Society, and the EPSRC projects DBOnto, MaSI³ and ED³. Part of the data and computation resources, as well as Xi Chen's contribution, are supported by Jarvis Lab Tencent.

References