Efficient Knowledge Graph Validation via Cross-Graph Representation Learning
Recent advances in information extraction have motivated the automatic construction of huge knowledge graphs (KGs) by mining large-scale text corpora. However, automatic extraction unavoidably introduces noisy facts into these KGs. One possible approach to validating the correctness of facts (i.e., triplets) in a KG is to map the triplets into vector representations that capture their semantic meaning. Although many representation learning approaches have been developed for knowledge graphs, they are not effective for validation: they usually assume that all facts are correct, and thus may overfit to noisy facts and fail to detect them. Towards effective KG validation, we propose to leverage an external human-curated KG as an auxiliary information source to help detect errors in a target KG. The external KG is built from human-curated knowledge repositories and tends to have high precision. The target KG, built by information extraction from text, has lower precision but can cover new or domain-specific facts that are absent from any human-curated repository. To tackle this challenging task, we propose a cross-graph representation learning framework, CrossVal, which leverages the external KG to validate the facts in the target KG efficiently. It does so by embedding triplets based on their semantic meaning, drawing cross-KG negative samples, and estimating a confidence score for each triplet that reflects its degree of correctness. We evaluate the proposed framework on datasets from different domains. Experimental results show that it achieves the best performance compared with state-of-the-art methods on large-scale KGs.
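As a rough illustration of the validation idea described above, the sketch below scores a triplet with a TransE-style distance mapped to a confidence in (0, 1), and corrupts the tail entity to produce negative samples. The embedding tables, the scoring function, and the sampling strategy are all simplified assumptions for exposition; CrossVal learns embeddings jointly over the target and external KGs and draws its negatives across graphs, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared vocabulary; in the paper, entities and relations are aligned
# across the target KG and the external human-curated KG.
entities = ["Shakespeare", "Hamlet", "Macbeth", "London"]
relations = ["wrote", "born_in"]
dim = 16

# Hypothetical embedding tables (random placeholders standing in for
# learned cross-graph representations).
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def confidence(h, r, t):
    """TransE-style plausibility ||h + r - t|| mapped to (0, 1) via a
    sigmoid. An assumed scoring function, not the paper's exact model."""
    score = -np.linalg.norm(E[h] + R[r] - E[t])
    return 1.0 / (1.0 + np.exp(-score))

def negative_samples(h, r, t, k=2):
    """Corrupt the tail entity -- a single-graph stand-in for the
    paper's cross-KG negative sampling."""
    pool = [e for e in entities if e != t]
    return [(h, r, str(e)) for e in rng.choice(pool, size=k, replace=False)]

# A triplet's confidence can then be thresholded to flag likely noise.
c = confidence("Shakespeare", "wrote", "Hamlet")
negs = negative_samples("Shakespeare", "wrote", "Hamlet")
```

In a trained model, correct triplets would receive confidences near 1 and the corrupted negatives near 0; here the random embeddings only demonstrate the scoring interface.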