Contrastive Knowledge Graph Error Detection

11/18/2022
by Qinggang Zhang, et al.

Knowledge Graph (KG) errors introduce non-negligible noise that severely affects KG-related downstream tasks. Detecting errors in KGs is challenging because error patterns are unknown and diverse, while ground-truth labels are rare or even unavailable. A traditional solution is to construct logical rules to verify triples, but this does not generalize: different KGs have distinct rules grounded in domain knowledge. Recent studies instead design tailored detectors or rank triples by KG embedding loss. However, they all rely on negative samples for training, generated by randomly replacing the head or tail entity of existing triples. Such a negative sampling strategy is insufficient for modeling practical KG errors, e.g., (Bruce_Lee, place_of_birth, China), in which the three elements are semantically related yet mismatched. A more effective unsupervised learning mechanism tailored to KG error detection is therefore needed. To this end, we propose a novel framework, ContrAstive knowledge Graph Error Detection (CAGED). It introduces contrastive learning into KG learning and provides a new way of modeling KGs: instead of the traditional setting that treats entities as nodes and relations as semantic edges, CAGED augments a KG into different hyper-views by regarding each relational triple as a node. After joint training with a KG embedding loss and a contrastive learning loss, CAGED assesses the trustworthiness of each triple based on two signals: the consistency of the triple's representations across the views and the self-consistency within the triple. Extensive experiments on three real-world KGs show that CAGED outperforms state-of-the-art methods in KG error detection. Our code and datasets are available at https://github.com/Qing145/CAGED.git.
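
To make the critique of random negative sampling concrete, here is a minimal sketch of the conventional corruption strategy described above. The toy triples, entity names, and the `random_corrupt` helper are illustrative assumptions, not code or data from the paper.

```python
import random

# Toy KG of (head, relation, tail) triples; the facts are illustrative only.
triples = [
    ("Bruce_Lee", "place_of_birth", "San_Francisco"),
    ("Bruce_Lee", "profession", "Martial_Artist"),
    ("Jackie_Chan", "place_of_birth", "Hong_Kong"),
]
entities = sorted({e for h, _, t in triples for e in (h, t)})

def random_corrupt(triple, entities):
    """Standard negative sampling: uniformly replace the head or the
    tail entity. The result is usually an obvious mismatch, unlike a
    realistic error such as (Bruce_Lee, place_of_birth, China), where
    all three elements are semantically related."""
    h, r, t = triple
    if random.random() < 0.5:
        return (random.choice(entities), r, t)
    return (h, r, random.choice(entities))

print(random_corrupt(triples[0], entities))
```

The abstract's two-signal trust score can be sketched in the same spirit. The snippet below is a hypothetical combination of cross-view consistency and a TransE-style energy; the cosine form, the ||h + r - t|| energy, and the weight `alpha` are assumptions for illustration, since the abstract does not spell out the exact scoring function.

```python
import torch
import torch.nn.functional as F

def trustworthiness_score(z_view1, z_view2, h, r, t, alpha=0.5):
    """Hypothetical per-triple error score (lower = more trustworthy)
    combining the two signals named in the abstract:
      (1) cross-view consistency: distance between the triple's
          embeddings under the two augmented hyper-views;
      (2) self-consistency: a TransE-style energy ||h + r - t||,
          an assumed stand-in for the paper's KG embedding loss.
    """
    cross_view = 1.0 - F.cosine_similarity(z_view1, z_view2, dim=-1)
    self_consistency = torch.norm(h + r - t, p=2, dim=-1)
    return alpha * cross_view + (1.0 - alpha) * self_consistency

# Usage on random toy embeddings for a batch of 4 triples.
d = 8
z1, z2 = torch.randn(4, d), torch.randn(4, d)
h, r, t = torch.randn(4, d), torch.randn(4, d), torch.randn(4, d)
print(trustworthiness_score(z1, z2, h, r, t))
```

Ranking all triples by such a score and flagging the highest-scoring ones as suspicious mirrors how embedding-based detectors of the kind the abstract describes are typically applied.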
