Improving Knowledge Graph Embedding via Iterative Self-Semantic Knowledge Distillation

06/07/2022
by Zhehui Zhou, et al.

Knowledge graph embedding (KGE) has been intensively investigated for link prediction by projecting entities and relations into continuous vector spaces. Current popular high-dimensional KGE methods obtain only slight performance gains while requiring enormous computation and memory costs. In contrast to high-dimensional KGE models, training low-dimensional models is more efficient and better suited to deployment in practical intelligent systems. However, the model's expressiveness of semantic information in knowledge graphs (KGs) is highly limited in the low-dimensional parameter space. In this paper, we propose an iterative self-semantic knowledge distillation strategy to improve KGE model expressiveness in the low-dimensional space. A KGE model combined with our proposed strategy plays the teacher and student roles alternately during the whole training process. Specifically, at a given iteration, the model acts as a teacher, providing semantic information for the student. At the next iteration, the model acts as a student, incorporating the semantic information transferred from the teacher. We also design a novel semantic extraction block to extract iteration-based semantic information for the training model's self-distillation. Iteratively incorporating and accumulating iteration-based semantic information enables the low-dimensional model to be more expressive for better link prediction in KGs. There is only one model during the whole training, which avoids the increase in computational expense and memory requirements. Furthermore, the proposed strategy is model-agnostic and can be seamlessly combined with other KGE models. Consistent and significant performance gains in experimental evaluations on four standard datasets demonstrate the effectiveness of the proposed self-distillation strategy.
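The alternating teacher/student idea in the abstract can be sketched as a training loop in which the scores produced at one iteration serve as soft targets for the next. The sketch below is illustrative only, under assumed details: a TransE-style scoring function, a mean-squared distillation term, and a fixed weight `alpha`; the paper's actual semantic extraction block and loss formulation may differ.

```python
# Minimal sketch of iterative self-semantic distillation for a KGE model.
# `score`, the embedding sizes, and the loss weighting are assumptions,
# not the paper's exact formulation.
import math
import random

random.seed(0)
DIM = 8  # low-dimensional embedding space
entities = {e: [random.gauss(0, 0.1) for _ in range(DIM)] for e in range(4)}
relations = {r: [random.gauss(0, 0.1) for _ in range(DIM)] for r in range(2)}

def score(h, r, t):
    # TransE-style plausibility score: -||h + r - t|| (higher = more plausible).
    return -math.sqrt(sum((entities[h][i] + relations[r][i] - entities[t][i]) ** 2
                          for i in range(DIM)))

triples = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]  # toy (head, relation, tail) facts

def distill_loss(student_scores, teacher_scores):
    # Pull the student's scores toward the teacher's (previous-iteration) scores.
    return sum((s - t) ** 2
               for s, t in zip(student_scores, teacher_scores)) / len(student_scores)

alpha = 0.5            # weight of the self-distillation term (assumed)
teacher_scores = None  # no teacher exists at the first iteration
for it in range(3):
    # The single model is the "student" now; its previous outputs are the "teacher".
    student_scores = [score(h, r, t) for h, r, t in triples]
    task_loss = sum(-s for s in student_scores)  # stand-in for the link-prediction loss
    loss = task_loss
    if teacher_scores is not None:
        loss += alpha * distill_loss(student_scores, teacher_scores)
    # ... gradient update of `entities` / `relations` would happen here ...
    teacher_scores = student_scores  # this iteration's model becomes the next teacher
```

Because the teacher is just the model's own earlier output, only one set of parameters is ever stored, matching the abstract's claim that the strategy avoids the memory cost of a separate teacher network.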


Related research

- 10/14/2020: Multi-teacher Knowledge Distillation for Knowledge Graph Completion
- 09/13/2020: DistilE: Distiling Knowledge Graph Embeddings for Faster and Cheaper Reasoning
- 02/04/2023: Knowledge Graph Completion Method Combined With Adaptive Enhanced Semantic Information
- 03/22/2023: From Wide to Deep: Dimension Lifting Network for Parameter-efficient Knowledge Graph Embedding
- 11/04/2020: On Self-Distilling Graph Neural Network
- 11/14/2022: An Interpretable Neuron Embedding for Static Knowledge Distillation
- 08/18/2022: Mind the Gap in Distilling StyleGANs
