Pre-training Transformers for Knowledge Graph Completion

03/28/2023
by   Sanxing Chen, et al.

Learning transferable representations of knowledge graphs (KGs) is challenging because of the heterogeneous, multi-relational nature of graph structures. Inspired by the success of Transformer-based pretrained language models in learning transferable representations of text, we introduce a novel inductive KG representation model (iHT) for KG completion via large-scale pre-training. iHT consists of an entity encoder (e.g., BERT) and a neighbor-aware relational scoring function, both parameterized by Transformers. We first pre-train iHT on a large KG dataset, Wikidata5M. Our approach achieves new state-of-the-art results on matched evaluations, with a relative improvement of more than 25% in mean reciprocal rank over previous SOTA models. When further fine-tuned on smaller KGs with either entity or relational shifts, the pre-trained iHT representations are shown to be transferable, significantly improving performance on FB15K-237 and WN18RR.
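To make the two-part design concrete, here is a minimal PyTorch sketch of a Transformer entity encoder feeding a neighbor-aware relational scoring Transformer, in the spirit of the abstract. All dimensions, layer counts, pooling choices, and the way neighbors and candidates are passed in are illustrative assumptions, not the authors' configuration; only the overall two-Transformer architecture is taken from the description above.

```python
# Minimal sketch: Transformer entity encoder + neighbor-aware relational scorer.
# Hyperparameters and interfaces are assumptions for illustration only.
import torch
import torch.nn as nn


class EntityEncoder(nn.Module):
    """Encodes an entity's textual description into a single vector.

    In the paper the entity encoder is a pretrained language model such as
    BERT; a small Transformer over token embeddings stands in for it here.
    """

    def __init__(self, vocab_size: int, dim: int = 256, layers: int = 2):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> (batch, dim), mean-pooled over tokens
        return self.encoder(self.tok(token_ids)).mean(dim=1)


class RelationalScorer(nn.Module):
    """Neighbor-aware scoring of (head, relation, ?) queries.

    The head entity vector, a learned relation embedding, and encoded
    neighbor entities form one input sequence; the contextualized head
    position is matched against every candidate entity vector.
    """

    def __init__(self, num_relations: int, dim: int = 256, layers: int = 2):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, head, relation, neighbors, candidates):
        # head: (batch, dim), relation: (batch,), neighbors: (batch, n, dim),
        # candidates: (num_entities, dim) -> scores: (batch, num_entities)
        seq = torch.cat(
            [head.unsqueeze(1), self.rel(relation).unsqueeze(1), neighbors], dim=1
        )
        query = self.encoder(seq)[:, 0]   # contextualized head position
        return query @ candidates.t()     # one score per candidate entity


if __name__ == "__main__":
    enc = EntityEncoder(vocab_size=1000)
    scorer = RelationalScorer(num_relations=10)
    heads = enc(torch.randint(0, 1000, (4, 16)))                  # 4 head descriptions
    cands = enc(torch.randint(0, 1000, (50, 16)))                 # 50 candidate entities
    nbrs = enc(torch.randint(0, 1000, (12, 16))).view(4, 3, -1)   # 3 neighbors per head
    scores = scorer(heads, torch.randint(0, 10, (4,)), nbrs, cands)
    print(scores.shape)  # torch.Size([4, 50])
```

Because the entity encoder is applied to textual descriptions rather than a fixed entity vocabulary, such a model can score entities unseen during pre-training, which is what makes the representation inductive and transferable to smaller KGs.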


Related research

11/04/2022 · KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction
The ability of knowledge graphs to represent complex relationships at sc...

08/28/2020 · HittER: Hierarchical Transformers for Knowledge Graph Embeddings
This paper examines the challenging problem of learning representations ...

09/14/2023 · A Data Source for Reasoning Embodied Agents
Recent progress in using machine learning models for reasoning tasks has...

06/02/2019 · Pre-training of Graph Augmented Transformers for Medication Recommendation
Medication recommendation is an important healthcare application. It is ...

07/08/2021 · Graph Neural Pre-training for Enhancing Recommendations using Side Information
Leveraging the side information associated with entities (i.e. users and...

03/12/2023 · LUKE-Graph: A Transformer-based Approach with Gated Relational Graph Attention for Cloze-style Reading Comprehension
Incorporating prior knowledge can improve existing pre-training models i...

11/14/2014 · Learning Multi-Relational Semantics Using Neural-Embedding Models
In this paper we present a unified framework for modeling multi-relation...
