
Disconnected Emerging Knowledge Graph Oriented Inductive Link Prediction

Inductive link prediction (ILP) aims to predict links for unseen entities in emerging knowledge graphs (KGs), considering the evolving nature of KGs. A more challenging scenario arises when an emerging KG consists of only unseen entities, called a disconnected emerging KG (DEKG). Existing studies for DEKGs focus only on predicting enclosing links, i.e., links inside the emerging KG. The bridging links, which carry the evolutionary information from the original KG to the DEKG, have not been investigated by previous work so far. To fill this gap, we propose a novel model entitled DEKG-ILP (Disconnected Emerging Knowledge Graph Oriented Inductive Link Prediction) that consists of the following two components. (1) The module CLRM (Contrastive Learning-based Relation-specific Feature Modeling) is developed to extract global relation-based semantic features that are shared between original KGs and DEKGs, with a novel sampling strategy. (2) The module GSM (GNN-based Subgraph Modeling) is proposed to extract the local subgraph topological information around each link in KGs. Extensive experiments conducted on several benchmark datasets demonstrate that DEKG-ILP achieves clear performance improvements over state-of-the-art methods for both enclosing and bridging link prediction. The source code is available online.


I Introduction

Knowledge Graphs (KGs), such as Freebase [3], NELL [5], and DBpedia [1], play a critical role in many applications like information retrieval [41, 48], recommendation systems [52, 49], multi-hop query [39], and question answering [16, 18]. A typical KG models data as a collection of facts, specifying entities as nodes and relations as edges, and thus has a strong ability to represent structured data. Predicting missing facts in KGs, also known as KG link prediction, is a widely studied problem [40] and has been proven successful benefiting from recent KG embedding methods [4, 9, 29].

Fig. 1: A motivating example of inductive link prediction. The original KG on the left and the DEKG on the right are two disconnected KGs without any edges connecting them.

Despite this success, KG link prediction remains challenging in many real-world scenarios. Shi and Weninger reported [31] that KGs are dynamically evolving rather than staying static; e.g., around 200 unseen entities emerged every day in DBpedia during late 2015 and early 2016. However, traditional transductive KG embedding methods are ineffective for emerging KGs because the new entities are unseen during training. Although this problem can be solved by retraining models on the whole graph combined with the unseen elements, the time and computation cost is intolerable in real-world applications [40, 11]. To address this problem, increasing attention has been paid to inductive link prediction (ILP), which aims to predict links for unseen entities in emerging KGs. Several graph neural network based methods [11, 38, 2] have been proposed that transfer information from original KGs to emerging KGs to obtain the embeddings of unseen entities without retraining the whole KG.

Recently, [34] introduced a more challenging scenario where emerging KGs consist of unseen entities only and no edges between the original KGs and emerging KGs are observed, called the disconnected emerging KG (DEKG) scenario. Note that the relation space is shared between original KGs and DEKGs (i.e., DEKGs contain only unseen entities and no unseen relations). For this scenario, Grail [34] and TACT [6] have been proposed to predict enclosing links, where both the head and tail entities are unseen entities inside DEKGs, by learning logical rules and reasoning over subgraph structures in an entity-independent manner. Despite the great contributions made by [34] and [6], the prediction of links connecting DEKGs and original KGs, which we formulate as bridging links, has not been explored so far.

Fig. 1 presents a motivating example of bridging link prediction for DEKGs. It describes KGs of the NBA 2008 draft, where the DEKG in Fig. 1(b) is composed of new entities and shares the common relation space with the original KG in Fig. 1(a). In this example, Russell joining the Thunder, i.e., the bridging link (Thunder, employ, Russell), brought a significant benefit to the Thunder in the following seasons. Generally, the absence of bridging links between two disconnected KGs is common in real-world applications, while these links usually imply critical information, such as a drug-drug interaction that helps develop new medicine (e.g., the discovery of Artemisinin) or a connection between two criminal cases that was overlooked by the police. Actually, the motivation of this paper also comes from a real-world criminal case in 2015 (https://baijiahao.baidu.com/s?id=1718118167319289628). A neglected connection between the case and another seemingly unrelated one that happened several years earlier brought a significant breakthrough in cracking both cases. In summary, revealing the connections between original KGs and DEKGs (i.e., predicting bridging links) can benefit many cross-graph applications [40, 17, 24].

However, existing studies for DEKGs (i.e., Grail [34] and TACT [6]) cannot handle this problem effectively. This is because their target is to predict enclosing links (i.e., the yellow dashed link in Fig. 1) inside DEKGs using a subgraph reasoning method, which seriously suffers from a topological limitation in DEKGs. Specifically, the topological limitation means there is no connected subgraph around a bridging link. Taking the target bridging link (Thunder, employ, Russell) in Fig. 1 as an example, the head entity Thunder and the tail entity Russell are in two disconnected KGs respectively. So the subgraphs constructed for this target link are two disconnected subgraphs, one around Thunder in the DEKG and the other around Russell in the original KG, and there is no path connecting the two subgraphs. Unfortunately, the underlying idea of Grail [34] and TACT [6] relies on the connectivity between two entities to perform path reasoning over a connected subgraph. For example, they rely on the relational path (Kevin Love → Russell → UCLA Bruins) to predict the link (UCLA Bruins, employ, Kevin Love), while such a path does not exist for a bridging link, as discussed above. Consequently, both Grail [34] and TACT [6] are problematic in predicting bridging links for DEKGs.

To deal with the above-mentioned problem, we propose a novel model, DEKG-ILP, which contains two modules: Contrastive Learning-based Relation-specific Feature Modeling (CLRM) and GNN-based Subgraph Modeling (GSM). Both CLRM and GSM are carefully designed to deal with the topological limitation, where CLRM is a fundamentally novel module that exploits global semantic information in KGs and GSM is an improved method developed from Grail [34] to extract local topological information around links. Specifically, the module CLRM first extracts global relation-based semantic information shared between original KGs and DEKGs and represents entities in an entity-independent manner. The key idea is that the semantic representation of an entity is computed based on its associated relations (e.g., Russell is recognized as an Employee and a Sports player since he is associated with the relations teammate, employed by, and coach). Following this intuition, a feature is defined for each relation, and entities are then represented as a fusion of the corresponding relation features. In this way, the entities in original KGs and DEKGs are linked via the shared relation space rather than the topological graph structure, thus tackling the topological limitation. Additionally, a novel contrastive-learning-enabled sampling strategy is designed to generate positive and negative examples for each entity to optimize the relation-specific features. Secondly, the GNN-based subgraph modeling module GSM is employed to exploit the local topological information of the subgraph around each link in KGs. A novel node labeling method is proposed in GSM to simulate disconnected nodes and deal with the topological limitation. Compared with existing studies [34, 6] that only focus on the prediction of enclosing links, both bridging links and enclosing links are exploited in this work.

To summarize, our main contributions are as follows:

  • We extend the existing formulation of inductive link prediction for unseen entities in a disconnected emerging scenario, by considering enclosing links and especially bridging links simultaneously.

  • We propose a novel model DEKG-ILP which can effectively solve the extended inductive link prediction task. Two carefully designed modules are included in DEKG-ILP. Firstly, the module CLRM is used to extract global relation-based semantic features that are shared between original KGs and DEKGs, where a contrastive learning method is employed to optimize these features. Secondly, a GNN-based subgraph modeling module GSM is used to exploit the local topological information around each link in KGs.

  • The comprehensive experiments conducted on several benchmark datasets demonstrate that our proposed model DEKG-ILP outperforms existing methods on predicting enclosing links. Moreover, different from existing methods, DEKG-ILP is able to predict bridging links.

The rest of the paper is organized as follows. Section II reviews the related work of KG link prediction and contrastive learning methods. Section III introduces the basic concepts and formulates the problem. Section IV firstly provides an overview of our proposed model DEKG-ILP and then describes the model in detail. We provide our experimental setup and results in Section V and conclude this work in Section VI.

Model           | Transductive | Emerging KG | DEKG Enclosing Link | DEKG Bridging Link
Transductive methods:
TransE [4]      |      ✓       |             |                     |
RotatE [33]     |      ✓       |             |                     |
ConvE [9]       |      ✓       |             |                     |
Inductive methods:
MEAN [11]       |      ✓       |      ✓      |                     |
GEN [2]         |      ✓       |      ✓      |                     |
Neural LP [47]  |      ✓       |      ✓      |          ✓          |
RuleN [20]      |      ✓       |      ✓      |          ✓          |
Grail [34]      |      ✓       |      ✓      |          ✓          |
TACT [6]        |      ✓       |      ✓      |          ✓          |
DEKG-ILP        |      ✓       |      ✓      |          ✓          |         ✓
TABLE I: Summary of KG link prediction methods. ✓ means the method is able to handle the task; a blank cell means it is not.

II Related Work

This section presents the related studies on link prediction and contrastive learning. Additionally, TABLE I summarizes which tasks each of these link prediction methods can handle.

II-A Transductive Link Prediction Methods.

Transductive embedding methods require that all entities be available during training. Translational distance based methods [4, 42, 19] measure the plausibility of facts as the distance between entities carried out by relations. [33, 51] further improve the translational distance based methods by modeling the translation between entities as a rotation in complex space. Factorization based models [25, 46, 36] extract the latent semantics of entities and relations in their embedding space. Recently, [9, 23] reshape KG embeddings and employ CNNs as encoders. Some studies [29, 30] also employ GNNs to aggregate neighborhood information and exploit the structural information in the graph. Unfortunately, all the above methods have to be retrained on the whole graph when new KGs emerge and cannot generalize to unseen entities, while the time and computational cost of retraining is intolerable in real-world applications [40, 11].

II-B Inductive Link Prediction Methods.

II-B1 Additional Information Methods

Inductive methods aim to predict links for unseen entities in emerging KGs without retraining the whole graph. Some works [44, 43] embed unseen entities using additional information like text descriptions or images. However, these methods are limited when the additional information is missing or insufficient, which is often the case in real-world KGs. Moreover, constructing additional information for KGs is itself costly. Thus additional-information based methods are not a good solution to ILP.

II-B2 Rule Induction Methods

Rule learning based methods [10, 20, 26] induce probabilistic logical rules by enumerating statistical regularities and patterns present in the knowledge graph, and are inherently inductive since the rules are independent of node identities. The traditional rule-based method [22] mines rules from data by inductive logic programming, but it suffers from poor scalability to large datasets and is challenging to optimize. Recently, Neural LP [47] proposed an end-to-end differentiable framework to learn rules using TensorLog [8] operators. DRUM [28] further improves Neural LP by mining more accurate logical rules. However, rule learning based methods mainly focus on mining Horn rules, limiting their ability to model more complex semantic correlations between relations in knowledge graphs.

II-B3 Embedding Based Methods

Several GNN-based methods [11, 38, 2] obtain the embeddings of unseen entities by aggregating information from original KGs to emerging KGs. MEAN [11] employs GNNs to encode an unseen entity by aggregating information from its neighbors in the original KG with a simple pooling function. LAN [38] introduces two attention mechanisms into the GNN, where the attention weights are computed with logic rules and learned by neural networks respectively. VN network [14] is proposed to solve the data sparsity problem by constructing virtual neighbors for unseen entities. GEN [2] further employs a meta-learning framework to simulate the emerging-KG scenario during training so that the model can inherently generalize to unseen entities. Recently, Grail [34] introduced the more challenging scenario that we term DEKGs. In this scenario, all the above-mentioned methods are problematic because they are based on graph message passing, which depends on edges existing between the original and emerging KGs and thus conflicts with the DEKG setting. To handle this problem, Grail [34] and TACT [6] investigate the problem of enclosing link prediction in DEKGs by reasoning over local subgraph structures. Despite the great contributions made by Grail and TACT, their methods are not suitable for predicting bridging links.

II-C Contrastive Learning in Graphs.

Contrastive learning is a class of self-supervised methods and has been applied in many CV and NLP applications [21, 13, 53]. Recently, contrastive learning has also been adopted in several graph representation learning algorithms. DGI [37] extends Deep InfoMax [15] by contrasting node and graph encodings. [27, 50, 7, 32] employ contrastive learning methods and generate positive and negative examples based on the topological information of different graph structures. However, KGs contain more than topological information; thus we propose a novel sampling strategy for contrastive learning to fully utilize the semantic information in KGs, aiming to obtain better representations of entities.

Overall, KG link prediction in both transductive and inductive scenarios has been studied from many different perspectives, yet no study considers the bridging links in the DEKG scenario. In addition, there is no contrastive learning method specially designed for KGs to fully exploit their semantic information. Consequently, we propose the model DEKG-ILP to consider both enclosing links and bridging links, with a novel contrastive learning sampling strategy specially designed for KGs.

III Problem Definition

In this section, we present definitions used throughout the paper and formulate the extended inductive link prediction problem in a disconnected emerging scenario.

Definition 1 (Knowledge Graph).

Let $\mathcal{E}$ denote the set of entities and $\mathcal{R}$ denote the set of relations. A knowledge graph (e.g., Fig. 1(a)) models data as a collection of triplets $\mathcal{T} = \{(h, r, t)\}$, where $h, t \in \mathcal{E}$ and $r \in \mathcal{R}$. Accordingly, a KG can be denoted as $G = \{\mathcal{E}, \mathcal{R}, \mathcal{T}\}$.

Definition 2 (Disconnected Emerging Knowledge Graph).

In a disconnected emerging scenario, an emerging KG (e.g., Fig. 1(b)) consists of an unseen entity set $\mathcal{E}'$ and the relation set $\mathcal{R}$ shared with the original KG $G$, without any edges being observed between the two KGs. Formally, a DEKG can be denoted as $G' = \{\mathcal{E}', \mathcal{R}, \mathcal{T}'\}$, where $\mathcal{E} \cap \mathcal{E}' = \emptyset$.

Definition 3 (Enclosing Links).

The enclosing links (e.g., the yellow dashed link in Fig. 1) are the links between unseen entities in the emerging KG $G'$, where both the head entity and the tail entity are unseen. Formally, an enclosing link is denoted as $(h, r, t)$ with $h, t \in \mathcal{E}'$.

Definition 4 (Bridging Links).

The bridging links (e.g., the green dashed link in Fig. 1) refer to the links that bridge $G$ and $G'$, where one of the head and tail entities is seen and the other is unseen. Formally, a bridging link is denoted as $(h, r, t)$ with $h \in \mathcal{E}, t \in \mathcal{E}'$ or $h \in \mathcal{E}', t \in \mathcal{E}$.

Definition 5 (Problem Formulation).

Given an original KG $G$ and a DEKG $G'$, the extended inductive link prediction task aims to predict both bridging links and enclosing links for unseen entities in $G'$. Specifically, we perform link prediction for queries in all the forms of $(h, r, ?)$, $(?, r, t)$, and $(h, ?, t)$, where the link can be either an enclosing link or a bridging link.

Fig. 2: Overview of DEKG-ILP. CLRM shows the construction of the relation-component tables $C_h$ and $C_t$ for the head and tail entities of the target link, as well as the sampling process for the two entities in contrastive learning. GSM outputs $\mathbf{h}_h$, $\mathbf{h}_t$, and $\mathbf{h}_G$ as the topological representations of $h$, $t$, and the whole subgraph.

Iv Proposed Model DEKG-ILP

In this section, we first present an overview of the proposed model DEKG-ILP, which consists of two modules, CLRM and GSM. Then the two modules and the training objective are discussed in detail.

IV-A Model Architecture Overview

The overview of our proposed model DEKG-ILP, which consists of two modules CLRM and GSM, is presented in Fig. 2. Specifically, CLRM extracts global relation-based semantic features shared across KGs, where contrastive learning is employed to optimize the extracted features with a carefully designed sampling strategy. GSM exploits the local topological information around each link in KGs. To predict a target link (e.g., the orange dashed link in Fig. 2), CLRM embeds the head and tail entities (i.e., $h$ and $t$) using only the entities' directly associated relations and calculates the score from the semantic perspective. GSM considers the multi-hop subgraph around the target link to embed the head entity, the tail entity, and the whole subgraph (i.e., $\mathbf{h}_h$, $\mathbf{h}_t$, and $\mathbf{h}_G$) and calculates the score from the topological perspective. The final score for the target link is obtained by combining the scores of the two modules.

IV-B Module CLRM

In this module, relation-specific feature modeling is designed to extract semantic features for the relations shared by original KGs and DEKGs from a global perspective and to represent entities with these features in an entity-independent manner. Furthermore, a contrastive learning based method with a novel semantic-aware sampling strategy is employed to optimize these features and fully exploit the semantic information in KGs. An example of the motivation behind CLRM is given in Fig. 3.

Fig. 3: An example of CLRM illustrating how this module represents entities in an entity-independent manner and how the semantic-aware sampling strategy generates positive and negative examples for each entity.

IV-B1 Relation-specific Feature Modeling

In KGs, intuitively, the semantic component of an entity, i.e., what the entity consists of from a semantic perspective, is determined by its associated relations. Continuing with the example in Fig. 1, we further illustrate this motivation in Fig. 3(a). Specifically, Thunder can be recognized as an Employer from the semantic perspective as it is the head entity of relation employ and the tail entity of relation employed by. Meanwhile, Thunder is also a Sports team due to the associated relation team coach. Thus an appropriate embedding for Thunder should be a fusion of the features representing an Employer and a Sports team. Following this intuition, an embedding method is first designed to represent entities with their relation features in original KGs. Then, the unseen entities in DEKGs can also be represented with these relation features using the same method, since the relations are shared between original KGs and DEKGs. In this way, seen and unseen entities can be embedded into the same feature space. For example, if the relation-specific features for employed by, teammate, and coach have been learned in the original KG, then the unseen entity Russell in the DEKG can be directly represented with these features. Next, the relation employ can be predicted between Thunder and Russell because they are recognized as an Employer and an Employee respectively from the semantic perspective. Inspired by this, CLRM first extracts a feature for each relation and then represents entities by fusing these relation-specific features. Formally, the set of relation-specific features extracted for the relations in $\mathcal{R}$ is denoted as:

$F = \{\mathbf{f}_{r_1}, \mathbf{f}_{r_2}, \ldots, \mathbf{f}_{r_{|\mathcal{R}|}}\}$   (1)

where $\mathbf{f}_{r_i}$ is a learned embedding defined in our model to represent the semantics of each relation $r_i \in \mathcal{R}$. Then the semantic information of an entity $e$ can be modeled as a relation-component table denoted as:

$C_e = \{n_1, n_2, \ldots, n_{|\mathcal{R}|}\}$   (2)

where $n_i$ denotes the number of triplets with relation $r_i$ that the entity $e$ is associated with, and $n_i$ is set to $0$ if there is no triplet with relation $r_i$. Note that the relation-component table of each entity is constructed using only the information of the entity's associated relations, thus Eq. (2) can be generalized to both seen entities in $G$ and unseen ones in $G'$. Examples of the relation-component tables $C_h$ and $C_t$ for entities $h$ and $t$ are given in Fig. 2.

Based on the relation-component table $C_e$, each entity $e$ can be represented as a fusion of the corresponding relation-specific features. Formally, the semantic representation of entity $e$ is defined as:

$\mathbf{e} = \Phi(C_e, F)$   (3)

where $\Phi$ is the fusion function for the relation-component table $C_e$ and the relation-specific features $F$. Notably, $C_e$ is constructed from the associated triplets of entity $e$, and $F$ is extracted for the relations shared between original KGs and DEKGs. In this way, the representation of an entity can be calculated with only its associated relations via $C_e$ and $F$, instead of initializing an embedding and fine-tuning it during training. Based on this method, CLRM models data in an entity-independent manner and naturally generalizes to unseen entities.
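To make this concrete, the following minimal Python sketch builds a relation-component table from raw triplets and fuses relation-specific features into an entity embedding. The function names are illustrative, and the count-weighted average is only an assumed instance of the fusion function $\Phi$, which the equations above leave abstract.

    import numpy as np

    def relation_component_table(entity, triplets, num_relations):
        # Count how many triplets of each relation the entity participates in (Eq. (2)).
        table = np.zeros(num_relations)
        for h, r, t in triplets:
            if h == entity or t == entity:
                table[r] += 1
        return table

    def fuse(table, features):
        # Fusion function Phi of Eq. (3); a count-weighted average is assumed here,
        # the actual fusion function of DEKG-ILP may differ.
        if table.sum() == 0:
            return np.zeros(features.shape[1])
        return (table @ features) / table.sum()

    # Toy example: 3 relations with 4-dimensional relation-specific features.
    F = np.random.randn(3, 4)            # one learned feature vector per relation
    triplets = [(0, 1, 2), (2, 0, 0)]    # (head, relation, tail) id triples
    C_e = relation_component_table(0, triplets, num_relations=3)
    e = fuse(C_e, F)                     # entity 0 is represented via its relations only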

Finally, the score function for the semantic likelihood of a triplet $(h, r, t)$ is defined as:

$s_{sem}(h, r, t) = (\mathbf{e}_h \odot \mathbf{w}_r) \cdot \mathbf{e}_t$   (4)

where $\mathbf{w}_r$ is a learned embedding of relation $r$ from the semantic perspective and $\odot$ denotes the element-wise product of embedding vectors, inspired by DistMult [46]. Notably, $\mathbf{w}_r$ is used as a weight vector for the relation $r$ in the DistMult-based decoder to calculate the score of a triplet $(h, r, t)$. The reason for choosing DistMult as our decoder is that DistMult is a semantic matching model, which aligns with the intuition of CLRM that we want to extract the semantic information behind KGs. Moreover, although DistMult is a transductive KGE method, it is only used as a decoder here, so whether it is transductive or inductive does not matter.
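As a quick illustration of the decoder, the sketch below (illustrative names, random toy vectors) computes the element-wise-product score of Eq. (4).

    import numpy as np

    def semantic_score(e_h, w_r, e_t):
        # Eq. (4): (e_h * w_r) . e_t, the DistMult-style semantic matching score.
        return float(np.sum(e_h * w_r * e_t))

    d = 4
    e_h, e_t = np.random.randn(d), np.random.randn(d)  # entity embeddings from Eq. (3)
    w_r = np.random.randn(d)                           # learned semantic weight of relation r
    s_sem = semantic_score(e_h, w_r, e_t)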

IV-B2 Semantic-aware Contrastive Learning

Inspired by the success of contrastive learning in graph representation learning [32, 27], a carefully designed contrastive learning based method is used to optimize the relation-specific features. The major novelty is the semantic-aware sampling strategy designed for the contrastive learning process. Intuitively, the essential semantics of an entity are stable as long as no new relation is attached to the entity and no relation is completely removed from it. In other words, we assume that an entity's semantics change significantly if the entity is attached with new relations or if all triplets with a particular relation of the entity are deleted. To model the semantic variation of entities, for each entity $e$, we define three different random operations on its relation-component table, i.e., relation variation $o_{var}$, relation addition $o_{add}$, and relation deletion $o_{del}$.

Following the example in Fig. 3, the social image of Russell as a Sports player will remain stable if he has more or fewer triplets of relation teammate (i.e., in Fig. 3(b), we add a teammate triplet with another entity), because there still exist triplets in the KG that provide him with the semantics of Sports player. But his social image will change significantly if all the triplets with relations teammate and coach are deleted, or if a new triplet of relation father of is added (i.e., we delete all triplets with relations teammate and coach in Fig. 3(c) and add a triplet with a new relation father of in Fig. 3(d)). This is because the triplets that provide him with the semantics of Sports player are all deleted, and the added triplet with relation father of attaches the new semantics of Father to him. Following this intuition, the sampling strategy in our contrastive learning method generates positive examples with relation variation and negative examples with relation addition and deletion. Formally, the three random operations are defined as follows.

Relation Variation

In operation $o_{var}$, the number of triplets with a particular relation of entity $e$ is randomly varied. Formally, select an $n_i > 0$ in $C_e$; it is randomly varied to another integer in the range $[1, \lambda \bar{n}]$, where $\lambda$ is the scaling hyper-parameter and $\bar{n}$ denotes the average number of triplets associated with each relation, denoted as:

$\bar{n} = \frac{|\mathcal{T}|}{|\mathcal{R}|}$   (5)

Relation Addition

In operation $o_{add}$, triplets of a randomly selected new relation are attached to entity $e$. Formally, select an $n_i = 0$ in $C_e$; it is randomly set to an integer in the range $[1, \lambda \bar{n}]$, where $\lambda$ and $\bar{n}$ are the same as in $o_{var}$.

Relation Deletion

In operation $o_{del}$, all the triplets with a particular relation of entity $e$ are deleted. Formally, select an $n_i > 0$ in $C_e$; it is set to $0$.
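A minimal sketch of the three operations as reconstructed above; the uniform re-draw in $[1, \lambda\bar{n}]$ and the variable names are our illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def relation_variation(table, n_avg, lam=2.0):
        # o_var: re-draw the count of one associated relation; the semantics stay stable.
        new = table.copy()
        i = rng.choice(np.flatnonzero(new > 0))
        new[i] = rng.integers(1, int(lam * n_avg) + 1)
        return new

    def relation_addition(table, n_avg, lam=2.0):
        # o_add: attach triplets of a previously unassociated relation (new semantics).
        new = table.copy()
        i = rng.choice(np.flatnonzero(new == 0))
        new[i] = rng.integers(1, int(lam * n_avg) + 1)
        return new

    def relation_deletion(table):
        # o_del: delete all triplets of one associated relation (semantics removed).
        new = table.copy()
        new[rng.choice(np.flatnonzero(new > 0))] = 0
        return new

    C_e = np.array([3.0, 0.0, 1.0, 0.0])   # relation-component table of an entity
    n_avg = 2.0                            # stand-in for the average statistic of Eq. (5)
    C_pos = relation_variation(C_e, n_avg)                     # positive example
    C_neg = relation_deletion(relation_addition(C_e, n_avg))   # negative example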

Based on the three random operations, given an entity $e$, a sequence of $o_{var}$ operations is applied to generate a positive example $e^+$ with corresponding relation-component table $C_{e^+}$, and a sequence of $o_{add}$ and $o_{del}$ operations is applied to generate a negative example $e^-$ with corresponding relation-component table $C_{e^-}$. Then the representations of the positive and negative examples can be obtained by the fusion function in Eq. (3) as:

$\mathbf{e}^+ = \Phi(C_{e^+}, F), \quad \mathbf{e}^- = \Phi(C_{e^-}, F)$   (6)

Next, the contrastive learning loss is calculated with a triplet loss function to maximize the similarity of the positive pair $(\mathbf{e}, \mathbf{e}^+)$ and minimize the similarity of the negative pair $(\mathbf{e}, \mathbf{e}^-)$. Formally,

$\mathcal{L}_{con} = \max\big(0,\; d(\mathbf{e}, \mathbf{e}^+) - d(\mathbf{e}, \mathbf{e}^-) + \gamma_{con}\big)$   (7)

where $\gamma_{con}$ is the hyper-parameter for the margin and $d(\cdot, \cdot)$ measures the similarity between two embedding vectors by calculating the Euclidean distance between them. The contrastive learning loss will be used in the final learning objective in Eq. (15) to optimize the relation-specific features. Notably, the contrastive learning is only employed during training, i.e., the above operations only consider the original KG $G$.
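The triplet loss of Eq. (7) can be sketched in a few lines (illustrative names; Euclidean distance as stated above):

    import numpy as np

    def contrastive_loss(e, e_pos, e_neg, margin=1.0):
        # Triplet loss of Eq. (7): pull e toward e_pos, push it away from e_neg,
        # using Euclidean distance as the (dis)similarity measure.
        d_pos = np.linalg.norm(e - e_pos)
        d_neg = np.linalg.norm(e - e_neg)
        return max(0.0, d_pos - d_neg + margin)

    e = np.zeros(4)
    e_pos, e_neg = np.full(4, 0.1), np.ones(4)
    print(contrastive_loss(e, e_pos, e_neg))  # 0.2 - 2.0 + 1.0 < 0, clamped to 0.0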

Fig. 4: An example of GSM illustrating how this module embeds a single subgraph for enclosing links or two disconnected subgraphs for bridging links.

IV-C Module GSM

To take full advantage of the topological information in the subgraphs around links and to relieve the topological limitation in bridging link prediction, we extend the idea of Grail [34] with an improved node labeling method. To be specific, the underlying idea of performing subgraph reasoning over the edges between entities (i.e., the solid blue arrows in Fig. 4(a)) still works in GSM when predicting enclosing links. In addition, the improved node labeling method enables GSM to extract topological information from two disconnected graphs when predicting bridging links, as shown in Fig. 4(b).

IV-C1 Subgraph Extraction

For a triplet $(h, r, t)$, the subgraph is constructed based on the $k$-hop neighbors of $h$ and $t$ in the KG. The time complexity of the subgraph extraction depends on the numbers of nodes, relations, and edges in the graph, the number of hops $k$, and the dimension of the embedding vectors [34]. Note that GSM extracts a single subgraph for an enclosing link and two disconnected subgraphs for a bridging link, as shown in Fig. 4.

IV-C2 Node Labeling

To model the relative position information in the subgraph, each node $v$ is labeled as $(d(v, h), d(v, t))$, where $d(v, h)$ denotes the length of the shortest path between $v$ and $h$ that does not pass through $t$ (and symmetrically for $d(v, t)$). The nodes $h$ and $t$ are uniquely labeled as $(0, 1)$ and $(1, 0)$ respectively. Then, the input embedding of node $v$ can be represented as $[\text{one-hot}(d(v, h)) \oplus \text{one-hot}(d(v, t))]$, where $\oplus$ denotes the concatenation of two embedding vectors and $\text{one-hot}(d)$ is a one-hot vector whose $d$-th entry is set to 1. However, the node labeling method in Grail suffers from the topological limitation and cannot handle the situation in Fig. 4(b). In GSM, we improve the labeling method as follows.

As observed from Fig. 4(b), the white nodes in one subgraph are unreachable from the nodes in the other subgraph. Grail considers only the situation in Fig. 4(a) and prunes the nodes that are reachable from only one of $h$ and $t$ (i.e., the green nodes), as it assumes that these nodes are redundant for forming an enclosing subgraph around the target link. However, we argue that these nodes can simulate the disconnected nodes: they are reachable from the node on one end of the target link while unreachable from the other within $k$ hops, and thus still carry useful information. Therefore, we retain these nodes and set $d(v, h)$ (or $d(v, t)$) to $\infty$ if $v$ is unreachable from the corresponding end. The one-hot vector of $\infty$ is set to all zeros.
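The sketch below illustrates the improved labeling on a toy bridging-link case with two disconnected components; the adjacency-list representation, helper names, and the sentinel value are our own assumptions.

    from collections import deque

    def bfs_dist(adj, src, k, blocked):
        # Shortest-path distances from src within k hops, never passing through `blocked`.
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            if dist[u] == k:
                continue
            for v in adj.get(u, []):
                if v not in dist and v != blocked:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def label_nodes(adj, h, t, k, inf=10**6):
        # Improved labeling: a node unreachable from one end gets the sentinel `inf`,
        # which is later encoded as an all-zero one-hot vector.
        dh, dt = bfs_dist(adj, h, k, t), bfs_dist(adj, t, k, h)
        labels = {v: (dh.get(v, inf), dt.get(v, inf)) for v in set(dh) | set(dt)}
        labels[h], labels[t] = (0, 1), (1, 0)
        return labels

    # Two disconnected components around h=0 and t=10, as for a bridging link.
    adj = {0: [1], 1: [0, 2], 2: [1], 10: [11], 11: [10]}
    print(label_nodes(adj, h=0, t=10, k=2))
    # e.g. node 1 -> (1, 1000000): reachable from h, unreachable from t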

IV-C3 Topological Information Modeling

With the enclosing subgraph around the link $(h, r, t)$ and the initial one-hot embeddings of nodes, an R-GCN [29] with the specially designed edge-attention AGGREGATE function [34] is employed to obtain the topological information of the subgraph around the link. In particular, the architecture of the $l$-th layer of the GNN is denoted as:

$\mathbf{a}_v^{(l)} = \text{AGGREGATE}\big(\{\mathbf{h}_u^{(l-1)} : u \in \mathcal{N}(v)\}\big)$   (8)
$\mathbf{h}_v^{(l)} = \text{COMBINE}\big(\mathbf{h}_v^{(l-1)}, \mathbf{a}_v^{(l)}\big)$   (9)

where $\mathcal{N}(v)$ is the collection of direct neighbors of entity $v$, $\mathbf{a}_v^{(l)}$ is the aggregated message from these neighbors, and $\mathbf{h}_v^{(l)}$ is the topological representation of entity $v$ in the $l$-th layer. Then an $L$-layer GNN is used to obtain the representation of each entity in the subgraph. The representation of the entire subgraph is obtained by applying average-pooling over the representations of all entities as:

$\mathbf{h}_G = \frac{1}{|\mathcal{V}_G|} \sum_{v \in \mathcal{V}_G} \mathbf{h}_v^{(L)}$   (10)

where $\mathcal{V}_G$ denotes the set of nodes in the subgraph. Finally, the score for the topological likelihood of the link $(h, r, t)$ is given by:

$s_{top}(h, r, t) = \mathbf{W} \big[\mathbf{h}_G \oplus \mathbf{h}_h^{(L)} \oplus \mathbf{h}_t^{(L)} \oplus \mathbf{v}_r\big]$   (11)

where $\mathbf{v}_r$ is a learned embedding of relation $r$ from the topological perspective and $\mathbf{W}$ is a linear weight matrix.
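A compact sketch of the pooling and scoring steps in Eqs. (10) and (11), with the final-layer GNN outputs stubbed by random vectors; the single-output linear layer and names are assumptions.

    import numpy as np

    def subgraph_score(node_embs, h_idx, t_idx, rel_emb, W):
        # Eq. (10): average-pool the final-layer node embeddings into h_G.
        h_G = node_embs.mean(axis=0)
        # Eq. (11): concatenate [h_G, h_h, h_t, v_r] and apply a linear layer.
        feats = np.concatenate([h_G, node_embs[h_idx], node_embs[t_idx], rel_emb])
        return float(W @ feats)

    d = 4
    node_embs = np.random.randn(6, d)  # stubbed L-th-layer outputs for 6 subgraph nodes
    s_top = subgraph_score(node_embs, h_idx=0, t_idx=5,
                           rel_emb=np.random.randn(d), W=np.random.randn(4 * d))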

IV-D Training Objective

To train the entire model, all triplets in the original KG naturally serve as positive triplets, denoted as $\mathcal{T}^+$. Then negative sampling is performed on each $(h, r, t) \in \mathcal{T}^+$ by randomly corrupting the head or tail entity with another entity in $\mathcal{E}$ to construct the negative triplet set $\mathcal{T}^-$. Formally, the set of corrupted negative triplets is denoted as:

$\mathcal{T}^- = \{(h', r, t) \mid h' \in \mathcal{E}\} \cup \{(h, r, t') \mid t' \in \mathcal{E}\}, \quad (h, r, t) \in \mathcal{T}^+$   (12)

To encourage the decoder to consider both the global semantic information and the local topological information, the score of each link is calculated as the sum of the scores in Eq. (4) and Eq. (11). Formally, the score is defined as:

$s(h, r, t) = s_{sem}(h, r, t) + s_{top}(h, r, t)$   (13)

Then we employ a margin-based ranking loss to assign high scores to positive triplets and low scores to negative ones:

$\mathcal{L}_{score} = \sum_{\tau^+ \in \mathcal{T}^+} \sum_{\tau^- \in \mathcal{T}^-} \max\big(0,\; s(\tau^-) - s(\tau^+) + \gamma\big)$   (14)

where $\gamma$ is the margin hyper-parameter. Finally, the overall training objective of the proposed model DEKG-ILP is to minimize the final loss defined as:

$\mathcal{L} = \mathcal{L}_{score} + \alpha \mathcal{L}_{con}$   (15)

where $\alpha$ is a hyper-parameter. Notably, the contrastive learning loss is only used during training to optimize the relation-specific features. The score of each link during testing is directly calculated by the score function in Eq. (13). The training process of DEKG-ILP is summarized in Algorithm 1.
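Putting the training signals together, the sketch below mirrors Eqs. (12), (14), and (15) with placeholder scores; names and constants are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def corrupt(triplet, entities):
        # Eq. (12): corrupt the head or the tail with a random entity of the original KG.
        h, r, t = triplet
        if rng.random() < 0.5:
            return (int(rng.choice(entities)), r, t)
        return (h, r, int(rng.choice(entities)))

    def ranking_loss(s_pos, s_neg, margin=1.0):
        # Eq. (14): margin-based ranking between a positive and a negative score.
        return max(0.0, s_neg - s_pos + margin)

    def final_loss(s_pos, s_neg, l_con, alpha=0.1):
        # Eq. (15): score loss plus the weighted contrastive loss.
        return ranking_loss(s_pos, s_neg) + alpha * l_con

    neg = corrupt((0, 1, 2), entities=[0, 1, 2, 3])
    print(final_loss(s_pos=2.5, s_neg=0.3, l_con=0.4))  # 0.0 + 0.1 * 0.4 = 0.04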

Algorithm 1: Training process of DEKG-ILP

Input: the positive triplets $\mathcal{T}^+$ and negative triplets $\mathcal{T}^-$; the subgraph for each link; the relation-component table $C_e$ for each entity $e$; the positive and negative examples $e^+$ and $e^-$ for each entity.
1:  Initialize the relation-specific features $F$, the parameters of the GNN model, and the learned embeddings $\mathbf{w}_r$ and $\mathbf{v}_r$;
2:  repeat
3:     for each triplet $(h, r, t)$ in the batch do
4:        Get the embeddings of $h$ and $t$ with $C_h$, $C_t$, and $F$ in Eq. (3);
5:        Calculate the semantic score $s_{sem}$ in Eq. (4);
6:        Input the subgraph into the GNN model and get $\mathbf{h}_h^{(L)}$, $\mathbf{h}_t^{(L)}$, and $\mathbf{h}_G$ at the $L$-th layer;
7:        Get the topological score $s_{top}$ in Eq. (11);
8:        Combine $s_{sem}$ and $s_{top}$ to obtain $s$ in Eq. (13);
9:        Calculate the scores of the corresponding negative triplets in the same way;
10:       Calculate the triplet score loss $\mathcal{L}_{score}$ in Eq. (14);
11:       Get the embeddings $\mathbf{e}^+$ and $\mathbf{e}^-$ for the positive and negative examples with $C_{e^+}$, $C_{e^-}$, and $F$;
12:       Compute the contrastive loss $\mathcal{L}_{con}$ in Eq. (7);
13:       Minimize the final loss $\mathcal{L}$ in Eq. (15) and update the parameters by back-propagation;
14:    end for
15: until the final epoch

V Experiments

In this section, we first introduce the experimental configuration, including datasets, baselines, evaluation metrics, and parameter setup. Then, we present the main results of our proposed model and all compared baselines on several benchmark datasets. We further report the performance of DEKG-ILP on predicting enclosing links only and bridging links only, respectively. Moreover, an ablation study, a complexity analysis, and a case study are also presented. The source code is available at https://github.com/Ninecl/DEKG-ILP.

V-A Dataset

To evaluate the performance of all compared methods on inductive link prediction with both enclosing links and bridging links in DEKGs, additional links are extracted from the corresponding real-world raw KGs for testing, based on the benchmark datasets provided by Grail [34]. The final evaluation datasets are constructed by mixing enclosing links and bridging links in different ratios to consider the impact of different data compositions.

            FB15k-237             NELL-995              WN18RR
        |R|   |E|    |T|     |R|   |E|    |T|     |R|   |E|    |T|
EQ  G   180   1594   5226    14    3103   5540    9     2746   6678
    G'  142   1093   2404    14    225    1034    8     922    1991
MB  G   200   2608   12085   88    2564   10109   10    6954   18968
    G'  172   1660   5570    79    2086   5997    10    2757   5304
ME  G   215   3668   22394   142   4647   20117   11    12078  32150
    G'  183   2501   9569    122   3566   10072   11    5084   7772
TABLE II: Statistics of the datasets. |R|, |E|, and |T| denote the numbers of relations, entities, and triplets; for each dataset, the upper row describes the original KG G and the lower row the DEKG G'.

Specifically, Grail extracted four datasets (v1, v2, v3, and v4) of different scales from each of three raw real-world KGs (i.e., FB15k-237 [9], NELL-995 [45], and WN18RR [35]), and each dataset has been split into an original KG $G$ for training and a DEKG $G'$ for testing. In our experiments, we further construct three datasets, EQ (equal links), MB (more bridging links), and ME (more enclosing links), for FB15k-237, NELL-995, and WN18RR respectively, based on the datasets v1, v2, and v3 released by Grail. During the training stage, $G$ is used as the training set, the same as in Grail. During the testing stage, apart from the triplets in $G'$ which serve as enclosing links for evaluation, we extract a certain number of triplets that bridge $G$ and $G'$ from the corresponding raw KGs as bridging links for evaluation as well. Note that these bridging links are real links extracted from the raw KGs. Finally, the evaluation datasets are constructed by mixing up these enclosing links and bridging links in the ratios of 1:1, 1:2, and 2:1 for EQ, MB, and ME respectively. TABLE II presents the statistics of these datasets.

V-B Baseline

The models Grail [34] and TACT [6] are used as the main baselines since they are both proposed for DEKGs. RuleN [20], GEN [2], TransE [4], RotatE [33], and ConvE [9] are compared as representatives of rule-mining based, GNN based, distance based, rotation based, and neural network based methods, to explore how these methods perform on DEKGs. Note that Grail, TACT, RuleN, and GEN can be directly applied to the constructed DEKG datasets as they are inherently inductive; we implement these four baselines using the source code released online. To run the remaining transductive methods TransE, RotatE, and ConvE in the inductive scenario, OpenKE [12] is extended as follows: we first train these methods on the original KG to obtain the embeddings of seen entities and relations; then the embeddings of unseen entities in the emerging KG are randomly initialized because they cannot be obtained during training; finally, we calculate the scores of inductive links with these embeddings. Notably, all baselines are implemented with the optimal parameter settings reported in their papers.

V-C Evaluation Metric

Like most related studies, we use Mean Reciprocal Rank (MRR) and Hits at N (Hits@N) as the evaluation metrics in our experiments. As Grail is evaluated on head/tail prediction while TACT only considers relation prediction, for a fair comparison we extend these baselines to all forms of prediction tasks, including $(h, r, ?)$, $(?, r, t)$, and $(h, ?, t)$. All the negative triplets for testing are constructed by replacing elements in triplets with candidates from the entity and relation sets containing all entities and relations in $G$ and $G'$. The ranks are measured in a filtered setting where all the triplets that appear in the training, validation, and test sets are removed. Moreover, all the models are run five times on each dataset with different random seeds and the average results are reported.
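For reference, these metrics can be computed from a list of filtered ranks as in the illustrative sketch below (not the released evaluation code):

    def mrr_and_hits(ranks, ns=(1, 5, 10)):
        # ranks[i] is the filtered rank of the i-th true triplet among its corruptions.
        mrr = sum(1.0 / r for r in ranks) / len(ranks)
        hits = {n: sum(r <= n for r in ranks) / len(ranks) for n in ns}
        return mrr, hits

    mrr, hits = mrr_and_hits([1, 3, 12, 2])
    print(mrr, hits)  # mrr ~ 0.479; Hits@1 = 0.25, Hits@5 = 0.75, Hits@10 = 0.75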

V-D Parameter Setup

The hyper-parameters of our model DEKG-ILP include the learning rate, the embedding dimension of the relation-specific features, the edge dropout rate that denotes the percentage of edges dropped in the GNN model, and the loss coefficient $\alpha$ used to adjust the weight of the contrastive learning loss in Eq. (15). A grid search is conducted on the validation sets to find the hyper-parameters with optimal performance. We sample 1 negative triplet for each positive triplet in the training set to calculate the triplet score loss in Eq. (14), and sample 10 positive and 10 negative examples for each entity to calculate the contrastive learning loss in Eq. (7). We fine-tune the learning rate in {0.1, 0.01, 0.001, 0.0005}, the relation-specific feature dimension in {16, 32, 64, 128}, the edge dropout in {0.1, 0.3, 0.5, 0.8}, and the coefficient $\alpha$ in {0.01, 0.1, 0.5, 1}. The optimal configuration found by this search is used for the inductive link prediction task.

Datasets Models EQ MB ME
MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10 MRR Hits@1 Hits@5 Hits@10

FB15k-237

TransE 0.241 0.169 0.264 0.337 0.210 0.143 0.249 0.310 0.197 0.123 0.218 0.317
RotatE 0.089 0.021 0.095 0.192 0.101 0.025 0.125 0.233 0.094 0.022 0.105 0.209
ConvE 0.102 0.033 0.105 0.227 0.097 0.031 0.105 0.200 0.119 0.030 0.114 0.228
GEN 0.093 0.032 0.089 0.196 0.109 0.030 0.130 0.241 0.101 0.027 0.110 0.207
RuleN 0.265 0.237 0.267 0.268 0.212 0.186 0.239 0.240 0.402 0.360 0.443 0.447
Grail 0.279 0.216 0.323 0.342 0.226 0.164 0.259 0.281 0.456 0.378 0.523 0.569
TACT 0.227 0.130 0.316 0.401 0.186 0.101 0.249 0.339 0.311 0.222 0.382 0.463
DEKG-ILP 0.508 0.351 0.693 0.841 0.535 0.396 0.693 0.832 0.634 0.512 0.785 0.891

NELL-995

TransE 0.083 0.023 0.079 0.156 0.234 0.161 0.263 0.345 0.158 0.088 0.177 0.269
RotatE 0.118 0.021 0.170 0.331 0.090 0.027 0.089 0.179 0.091 0.014 0.094 0.187
ConvE 0.098 0.035 0.108 0.172 0.102 0.033 0.088 0.194 0.100 0.026 0.102 0.179
GEN 0.091 0.029 0.084 0.181 0.132 0.105 0.156 0.239 0.127 0.058 0.142 0.223
RuleN 0.234 0.197 0.230 0.258 0.300 0.249 0.336 0.340 0.452 0.371 0.508 0.510
Grail 0.193 0.109 0.231 0.393 0.307 0.233 0.352 0.423 0.411 0.305 0.518 0.614
TACT 0.156 0.071 0.221 0.328 0.223 0.125 0.305 0.420 0.292 0.165 0.428 0.557
DEKG-ILP 0.353 0.218 0.489 0.631 0.468 0.301 0.694 0.830 0.532 0.380 0.727 0.842

WN18RR

TransE 0.133 0.073 0.135 0.353 0.164 0.105 0.167 0.369 0.160 0.065 0.126 0.389
RotatE 0.161 0.038 0.271 0.476 0.142 0.030 0.215 0.458 0.135 0.028 0.197 0.407
ConvE 0.111 0.051 0.124 0.308 0.122 0.049 0.165 0.333 0.129 0.049 0.115 0.327
GEN 0.158 0.101 0.147 0.372 0.153 0.098 0.170 0.365 0.148 0.051 0.119 0.352
RuleN 0.382 0.342 0.402 0.410 0.252 0.232 0.241 0.466 0.335 0.312 0.334 0.434
Grail 0.401 0.320 0.473 0.613 0.261 0.179 0.344 0.513 0.341 0.243 0.424 0.611
TACT 0.442 0.328 0.578 0.593 0.335 0.231 0.455 0.472 0.321 0.224 0.431 0.475
DEKG-ILP 0.471 0.350 0.607 0.701 0.359 0.240 0.480 0.625 0.378 0.245 0.534 0.685
TABLE III: Main results on EQ, MB, and ME of FB15k-237, NELL-995, and WN18RR

V-E Main Results

The overall results of all methods on EQ, MB, and ME of FB15k-237, NELL-995, and WN18RR, where both enclosing links and bridging links are contained in the test set, are presented in TABLE III. MRR, Hits@1, Hits@5, and Hits@10 are reported. Several observations can be made: 1) DEKG-ILP outperforms all baselines consistently across all datasets, benefiting from a careful design that enables both enclosing and bridging link prediction. More details on how DEKG-ILP performs for enclosing links only and bridging links only are discussed in Section V-F. 2) The improvements of DEKG-ILP on FB15k-237 and NELL-995 are more obvious than those on WN18RR. This phenomenon may be caused by the different numbers of relations in the datasets, as presented in TABLE II: FB15k-237 and NELL-995 contain far more relations than WN18RR, suggesting that the proposed model can extract richer semantic information when the given KG contains more relations. 3) The improvement of DEKG-ILP on MB is more obvious than that on EQ and ME, as MB contains more bridging links. Combined with the first observation, this shows that while our model improves over existing works in predicting both enclosing links and bridging links, more of the improvement comes from bridging links. The underlying reason is that our model can perform both enclosing and bridging link prediction, while existing works can only predict enclosing links. 4) Grail performs better than the other baselines on most datasets as it was specially designed for DEKGs; its subgraph reasoning method yields good performance on predicting enclosing links in DEKGs. However, Grail suffers from the topological limitation and cannot handle the bridging link prediction task, so it still performs worse than DEKG-ILP. 5) Although built upon Grail, TACT does not perform as well as Grail: TACT specially considers six different topological interactions of relations and achieves good performance on relation prediction, but performs poorly on head and tail prediction, thus underperforming Grail and DEKG-ILP in general. 6) RuleN achieves good results at Hits@1 on most datasets, especially on ME of WN18RR, but cannot maintain the same performance at Hits@5 and Hits@10. This is because it only considers whether a rule path exists or not (i.e., 1 or 0), instead of calculating the probability of a rule path. 7) Although GEN is also an inductive method, it does not achieve good performance because GEN embeds unseen entities by transferring information from seen entities to unseen ones through the edges between them, which do not exist between the original KG and the DEKG. Thus GEN cannot effectively embed the unseen entities in DEKGs or predict links for them. 8) The remaining three transductive methods, TransE, RotatE, and ConvE, all perform poorly on these datasets, demonstrating that transductive methods are not suitable for the inductive scenario, even when a more complex model is used as the encoder.

V-F Respective Study

In this section, we evaluate the models on enclosing links only and bridging links only, respectively. Fig. 5 presents the Hits@10 of all methods on the datasets EQ, MB, and ME with either enclosing links only or bridging links only; MRR, Hits@1, and Hits@5 are omitted due to space limitations. It can be observed that: 1) DEKG-ILP consistently outperforms all baselines on both the enclosing and bridging link prediction tasks across all datasets. The impressive gap between DEKG-ILP and the other baselines when predicting bridging links demonstrates that our proposed model can handle the bridging link prediction task ignored by previous work, benefiting from the global semantic information extracted in CLRM. Additionally, the improvement of DEKG-ILP on enclosing links demonstrates that our model also performs better on enclosing link prediction than the baseline methods. 2) Although both TACT and Grail perform well on enclosing links, DEKG-ILP still performs better. The likely reason is that they only exploit the local topological information in KGs, while DEKG-ILP also exploits global relation-based semantic information. Their poor performance on bridging links arises because the subgraph reasoning in Grail and TACT depends heavily on an enclosing subgraph structure, which is missing between two disconnected KGs. 3) TransE gives very limited performance in predicting enclosing links because all entities are unseen during testing. However, it is able to predict bridging links to some extent. The possible reason is that the distance translation in embedding space proposed by TransE can capture the relevance between original KGs and DEKGs. Note that we choose TransE as the representative of the three transductive methods and do not report the results of ConvE and RotatE, since TransE achieves the best performance among the three. 4) RuleN is able to predict enclosing links to some extent by mining logical rules. However, it shows very limited performance in predicting bridging links because rule-mining methods depend heavily on observed edges between entities, which are missing between original KGs and DEKGs. 5) GEN performs poorly on both enclosing links and bridging links because it embeds unseen entities depending on the edges between seen and unseen entities, which do not exist in our scenario, so the final embeddings of unseen entities in GEN remain close to randomly initialized vectors, similar to TransE.

(a) FB15k-237 enclosing
(b) FB15k-237 bridging
(c) NELL-995 enclosing
(d) NELL-995 bridging
(e) WN18RR enclosing
(f) WN18RR bridging
Fig. 5: Hits@10 results of the enclosing and bridging link prediction tasks respectively

V-G Ablation Study

In this section, we present ablation studies that validate the effectiveness of the semantic-aware contrastive learning and the relation-specific features in CLRM, and of the improved node labeling method in GSM. (1) The variant DEKG-ILP-R is constructed by removing the semantic score function (i.e., removing $s_{sem}$ from Eq. (13)) to validate the effectiveness of the relation-specific features. (2) The variant DEKG-ILP-C is constructed by removing the contrastive learning loss (i.e., setting the hyper-parameter $\alpha$ in Eq. (15) to 0) to validate the effectiveness of the semantic-aware contrastive learning. (3) The variant DEKG-ILP-N is constructed by removing the improved node labeling method in GSM to validate the effectiveness of that method. Fig. 6 presents the Hits@10 results of the ablation studies on EQ, MB, and ME of the three benchmark datasets for enclosing links and bridging links respectively.

V-G1 DEKG-ILP-R

The relation-specific features are the key component in CLRM; they extract global semantic information in KGs and can inherently be generalized from original KGs to emerging KGs. The consistent performance gap between DEKG-ILP-R and DEKG-ILP-C when predicting bridging links emphasizes the importance of the relation-specific features extracted from original KGs. This demonstrates that the global semantic features are not restricted by the topological structure, i.e., although original KGs and DEKGs are topologically disconnected, the learned relation-based semantic information is still effective for link prediction. The improvement can be observed as well when predicting enclosing links, demonstrating that the global semantic features also help generalize information from original KGs to emerging KGs to predict enclosing links. Moreover, the effectiveness of the relation-specific features is more obvious on FB15k-237 and NELL-995 than on WN18RR. This may be caused by the different numbers of relations in the datasets, suggesting that DEKG-ILP can extract richer semantic information from a KG that contains more relations, which aligns with common sense.

V-G2 DEKG-ILP-C

The semantic-aware contrastive learning is proposed to optimize the relation-specific features in CLRM, with a novel sampling strategy that generates positive and negative examples for each entity during training. The higher performance of DEKG-ILP compared with DEKG-ILP-C indicates that the proposed semantic-aware contrastive learning method helps obtain better embeddings of the relation-specific features. This is because the novel sampling strategy simulates the semantic variation of entities introduced in Section IV-B2 and can fully exploit the semantic information in KGs. It can be observed from the figure that the semantic-aware contrastive learning brings a more obvious improvement for DEKG-ILP on FB15k-237 MB, FB15k-237 ME, and NELL-995 ME. The possible reason is that the entities in these three datasets have more associated triplets on average (i.e., a larger ratio of triplets to entities in TABLE II), so the sampling strategy proposed in CLRM can generate more diverse positive and negative examples for each entity to optimize the relation-specific features.

(a) FB15k-237 EQ
(b) FB15k-237 MB
(c) FB15k-237 ME
(d) NELL-995 EQ
(e) NELL-995 MB
(f) NELL-995 ME
(g) WN18RR EQ
(h) WN18RR MB
(i) WN18RR ME
Fig. 6: Hits@10 of ablation studies on EQ, MB, ME of FB15k-237, NELL-995, and WN18RR for enclosing links and bridging links respectively

V-G3 DEKG-ILP-N

An improved node labeling method is proposed in GSM to relieve the topological limitation in DEKGs. Different from CLRM, which tackles the topological limitation by extracting features from the shared relation space, GSM handles this problem by simulating disconnected nodes using the improved node labeling method. An improvement of around 2% to 3% from DEKG-ILP-N to DEKG-ILP can be observed when predicting bridging links. However, the improvement is marginal when predicting enclosing links, and even backfires on WN18RR ME. This is because the subgraph reasoning method in GSM relies on the paths between the head and tail entities, so the nodes preserved by the improved node labeling method may instead become noise. In general, the improved node labeling method is helpful when predicting bridging links, but is less effective when predicting enclosing links.

Fig. 7: The time and parameter complexity of baselines on FB15k-237 ME.

V-H Complexity Study

In this section, we analyze the time and parameter complexity of DEKG-ILP and the compared models. The experiments are conducted on an Intel Xeon E5-2650 v4 CPU and a single 1080Ti GPU. In Fig. 7, we report the number of parameters and the average inference time for 50 links on FB15k-237 ME.

Models Epoch FB15k-237 NELL-995 WN18RR
EQ MB ME EQ MB ME EQ MB ME
T-T T-I T-T T-I T-T T-I T-T T-I T-T T-I T-T T-I T-T T-I T-T T-I T-T T-I
TransE 1000 0.02 0.011 0.02 0.011 0.03 0.012 0.01 0.011 0.02 0.011 0.03 0.012 0.01 0.011 0.02 0.011 0.03 0.012
RotatE 1000 0.02 0.015 0.02 0.015 0.03 0.016 0.02 0.015 0.02 0.015 0.03 0.016 0.01 0.015 0.03 0.015 0.04 0.015
ConvE 1000 0.05 0.060 0.09 0.061 0.16 0.060 0.06 0.061 0.07 0.061 0.14 0.061 0.02 0.060 0.05 0.060 0.10 0.061
GEN 5000 0.35 0.073 1.19 0.074 2.64 0.074 0.23 0.073 1.03 0.074 2.66 0.075 0.13 0.073 0.95 0.073 1.68 0.075
Grail 100 4.01 0.114 13.5 0.119 31.2 0.124 1.29 0.110 7.92 0.121 24.5 0.128 0.70 0.112 1.99 0.118 3.85 0.125
TACT 100 5.72 0.170 19.4 0.177 46.3 0.185 1.85 0.165 11.4 0.180 36.7 0.186 1.40 0.172 3.46 0.176 7.62 0.181
DEKG-ILP 100 4.13 0.139 14.8 0.145 32.4 0.151 1.33 0.135 8.68 0.147 25.7 0.152 0.73 0.140 2.05 0.144 3.91 0.148
TABLE IV: The training time and inference time of each model on all datasets. Epoch denotes the number of training epochs for each model in the experiments; T-T denotes the training time (minutes) per epoch; T-I denotes the average inference time (seconds) for 50 links.

The parameter complexity of TransE, RotatE, ConvE, and GEN is much higher than that of Grail, TACT, and DEKG-ILP, because these four methods are entity-specific KGE methods where each entity corresponds to an embedding vector, while Grail, TACT, and DEKG-ILP only define learned embeddings for relations. The parameter complexity of DEKG-ILP increases slightly compared with Grail but is still much lower than that of TACT. This is because DEKG-ILP constructs a corresponding relation-specific feature for each relation, so its parameter complexity increases to $O(|\mathcal{R}|dL + |\mathcal{R}|d)$ compared with Grail, which is $O(|\mathcal{R}|dL)$, where $|\mathcal{R}|$ is the number of relations, $d$ is the dimension of the embedding vectors, and $L$ is the number of layers in the GNN model. In contrast, TACT models the correlations between relations by considering six different topological interactions of relations, so its parameter complexity grows to $O(6|\mathcal{R}|dL)$.

Although they have smaller parameter sizes, the time complexity of the subgraph reasoning methods (i.e., Grail, TACT, and DEKG-ILP) is generally higher than that of the entity-specific KGE methods (i.e., TransE, ConvE, RotatE, and GEN). This is because the subgraph reasoning methods involve a more complex GNN-based encoder and a subgraph extraction process based on shortest-path computation [34], while TransE and RotatE directly calculate the score of each link from the entity embeddings, so their time complexity is much lower. The time complexity of ConvE and GEN is a little higher than that of TransE and RotatE, since they introduce a CNN and a GNN as the encoder respectively. The details of training time and inference time are presented in TABLE IV.

In summary, although our proposed model DEKG-ILP sacrifices some efficiency, it has a significant advantage in parameter complexity, and its overall running time is still tolerable in practice (i.e., about 145 ms for 50 links on average).

(a) The heat map of the enclosing link in FB15k-237
(b) The heat map of the bridging link in NELL-995
Fig. 8: The embedding heat maps of the enclosing link and the bridging link

V-I Case Study

In this section, we choose an enclosing link (08720, film production_companies, 0g1rw) in FB15k-237 and a bridging link (spurs, team_play_against_team, grizzlies) in NELL-995 to visualize how the encoder in DEKG-ILP works on enclosing links and bridging links respectively. The embedding heat maps of the chosen links are shown in Fig. 8. To construct the heat maps, we first concatenate and reshape the 32-dimensional embeddings $\mathbf{e}_h$ and $\mathbf{e}_t$ from CLRM into a matrix, which is visualized as the semantic embedding heat map on the left of Fig. 8. The same operation is performed on $\mathbf{h}_h$ and $\mathbf{h}_t$ from GSM to obtain the topological embedding heat map on the right. As observed in Fig. 8(b), when predicting the bridging link, there are many active values in the semantic embedding, while most values in the topological embedding are close to zero. In contrast, as shown in Fig. 8(a), the distribution of active values is much more balanced when predicting the enclosing link. This demonstrates that the module CLRM plays a more important role than GSM when predicting bridging links, while the contributions of the two modules are similar when predicting enclosing links, enabling DEKG-ILP to perform well on both enclosing and bridging link prediction.

VI Conclusion

In this paper, we extend the problem of inductive link prediction for disconnected emerging knowledge graphs to consider enclosing links and especially bridging links in DEKGs. To handle this problem, we propose a novel model entitled DEKG-ILP, which contains two modules, CLRM and GSM. Specifically, the Contrastive Learning-based Relation-specific Feature Modeling module CLRM is employed to exploit the global semantic features shared by original KGs and DEKGs, where a semantic-aware contrastive learning method with a novel sampling strategy is designed to optimize these features. The GNN-based Subgraph Modeling module GSM, with an improved node labeling method, is used to exploit the local topological information in KGs. Comprehensive experiments demonstrate that our proposed model not only achieves better performance on predicting enclosing links, but also handles the bridging link prediction problem ignored by previous work.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (No. 61902270), the Major Program of the Natural Science Foundation of Jiangsu Higher Education Institutions of China (No. 19KJA610002), and the Australian Research Council (Nos. FT210100624 and DP190101985).

References

  • [1] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. G. Ives (2007) DBpedia: a nucleus for a web of open data. In The Semantic Web, pp. 722–735.
  • [2] J. Baek, D. B. Lee, and S. J. Hwang (2020) Learning to extrapolate knowledge: transductive few-shot out-of-graph link prediction. In NeurIPS, pp. 6–12.
  • [3] K. D. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor (2008) Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD, pp. 1247–1250.
  • [4] A. Bordes, N. Usunier, A. García-Durán, J. Weston, and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In NIPS, pp. 2787–2795.
  • [5] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. H. Jr., and T. M. Mitchell (2010) Toward an architecture for never-ending language learning. In AAAI, pp. 1306–1313.
  • [6] J. Chen, H. He, F. Wu, and J. Wang (2021) Topology-aware correlations between relations for inductive link prediction in knowledge graphs. In AAAI, pp. 6271–6278.
  • [7] G. Chu, X. Wang, C. Shi, and X. Jiang (2021) CuCo: graph representation with curriculum contrastive learning. In IJCAI, pp. 2300–2306.
  • [8] W. W. Cohen (2016) TensorLog: a differentiable deductive database. CoRR abs/1605.06523.
  • [9] T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel (2018) Convolutional 2D knowledge graph embeddings. In AAAI, pp. 1811–1818.
  • [10] L. A. Galárraga, C. Teflioudi, K. Hose, and F. M. Suchanek (2013) AMIE: association rule mining under incomplete evidence in ontological knowledge bases. In WWW, pp. 413–422.
  • [11] T. Hamaguchi, H. Oiwa, M. Shimbo, and Y. Matsumoto (2017) Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In IJCAI, pp. 1802–1808.
  • [12] X. Han, S. Cao, L. Xin, Y. Lin, Z. Liu, M. Sun, and J. Li (2018) OpenKE: an open toolkit for knowledge embedding. In EMNLP.
  • [13] K. He, H. Fan, Y. Wu, S. Xie, and R. B. Girshick (2020) Momentum contrast for unsupervised visual representation learning. In CVPR, pp. 9726–9735.
  • [14] Y. He, Z. Wang, P. Zhang, Z. Tu, and Z. Ren (2020) VN network: embedding newly emerging entities with virtual neighbors. In CIKM, pp. 505–514.
  • [15] R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, P. Bachman, A. Trischler, and Y. Bengio (2019) Learning deep representations by mutual information estimation and maximization. In ICLR.
  • [16] S. Hu, L. Zou, J. X. Yu, H. Wang, and D. Zhao (2018) Answering natural language questions by subgraph matching over knowledge graphs (extended abstract). In ICDE, pp. 1815–1816.
  • [17] N. Q. V. Hung, C. T. Duong, T. T. Nguyen, M. Weidlich, K. Aberer, H. Yin, and X. Zhou (2017) Argument discovery via crowdsourcing. VLDB J. 26 (4), pp. 511–535.
  • [18] M. Kaiser, R. S. Roy, and G. Weikum (2021) Reinforcement learning from reformulations in conversational question answering over knowledge graphs. In SIGIR, pp. 459–469.
  • [19] Z. Li, W. Zheng, X. Lin, Z. Zhao, Z. Wang, Y. Wang, X. Jian, L. Chen, Q. Yan, and T. Mao (2020) TransN: heterogeneous network representation learning by translating node embeddings. In ICDE, pp. 589–600.
  • [20] C. Meilicke, M. Fink, Y. Wang, D. Ruffinelli, R. Gemulla, and H. Stuckenschmidt (2018) Fine-grained evaluation of rule- and embedding-based systems for knowledge graph completion. In ISWC, pp. 3–20.
  • [21] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS, pp. 3111–3119.
  • [22] S. Muggleton (1991) Inductive logic programming. New Gener. Comput. 8 (4), pp. 295–318.
  • [23] D. Q. Nguyen, T. D. Nguyen, D. Q. Nguyen, and D. Q. Phung (2018) A novel embedding model for knowledge base completion based on convolutional neural network. In NAACL-HLT, pp. 327–333.
  • [24] T. T. Nguyen, M. Weidlich, D. C. Thang, H. Yin, and N. Q. V. Hung (2017) Retaining data from streams of social platforms with minimal regret. In IJCAI, C. Sierra (Ed.), pp. 2850–2856.
  • [25] M. Nickel, V. Tresp, and H. Kriegel (2011) A three-way model for collective learning on multi-relational data. In ICML, pp. 809–816.
  • [26] P. G. Omran, K. Wang, and Z. Wang (2021) An embedding-based approach to rule learning in knowledge graphs. TKDE 33 (4), pp. 1348–1359.
  • [27] J. Qiu, Q. Chen, Y. Dong, J. Zhang, H. Yang, M. Ding, K. Wang, and J. Tang (2020) GCC: graph contrastive coding for graph neural network pre-training. In SIGKDD, pp. 1150–1160.
  • [28] A. Sadeghian, M. Armandpour, P. Ding, and D. Z. Wang (2019) DRUM: end-to-end differentiable rule mining on knowledge graphs. In NeurIPS, pp. 15321–15331.
  • [29] M. S. Schlichtkrull, T. N. Kipf, P. Bloem, R. van den Berg, I. Titov, and M. Welling (2018) Modeling relational data with graph convolutional networks. In ESWC, pp. 593–607.
  • [30] C. Shang, Y. Tang, J. Huang, J. Bi, X. He, and B. Zhou (2019) End-to-end structure-aware convolutional networks for knowledge base completion. In AAAI, pp. 3060–3067.
  • [31] B. Shi and T. Weninger (2018) Open-world knowledge graph completion. In AAAI, pp. 1957–1964.
  • [32] M. Sun, J. Xing, H. Wang, B. Chen, and J. Zhou (2021) MoCL: data-driven molecular fingerprint via knowledge-aware contrastive learning from molecular graph. In SIGKDD, pp. 3585–3594.
  • [33] Z. Sun, Z. Deng, J. Nie, and J. Tang (2019) RotatE: knowledge graph embedding by relational rotation in complex space. In ICLR.
  • [34] K. K. Teru, E. Denis, and W. Hamilton (2020) Inductive relation prediction by subgraph reasoning. In ICML, pp. 9448–9457.
  • [35] K. Toutanova and D. Chen (2015) Observed versus latent features for knowledge base and text inference. In CVSC, pp. 57–66.
  • [36] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, and G. Bouchard (2016) Complex embeddings for simple link prediction. In ICML, pp. 2071–2080.
  • [37] P. Velickovic, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, and R. D. Hjelm (2019) Deep graph infomax. In ICLR.
  • [38] P. Wang, J. Han, C. Li, and R. Pan (2019) Logic attention based neighborhood aggregation for inductive knowledge graph embedding. In AAAI, pp. 7152–7159.
  • [39] Q. Wang, H. Yin, W. Wang, Z. Huang, G. Guo, and Q. V. H. Nguyen (2019) Multi-hop path queries over knowledge graphs with neural memory networks. In DASFAA, Lecture Notes in Computer Science, Vol. 11446, pp. 777–794.
  • [40] Q. Wang, Z. Mao, B. Wang, and L. Guo (2017) Knowledge graph embedding: a survey of approaches and applications. TKDE 29 (12), pp. 2724–2743.
  • [41] Y. Wang, A. Khan, T. Wu, J. Jin, and H. Yan (2020) Semantic guided and response times bounded top-k similarity search over knowledge graphs. In ICDE, pp. 445–456.
  • [42] Z. Wang, J. Zhang, J. Feng, and Z. Chen (2014) Knowledge graph embedding by translating on hyperplanes. In AAAI, pp. 1112–1119.
  • [43] R. Xie, Z. Liu, H. Luan, and M. Sun (2017) Image-embodied knowledge representation learning. In IJCAI, pp. 3140–3146.
  • [44] R. Xie, Z. Liu, and M. Sun (2016) Representation learning of knowledge graphs with hierarchical types. In IJCAI, pp. 2965–2971.
  • [45] W. Xiong, T. Hoang, and W. Y. Wang (2017) DeepPath: a reinforcement learning method for knowledge graph reasoning. In EMNLP, pp. 564–573.
  • [46] B. Yang, W. Yih, X. He, J. Gao, and L. Deng (2015) Embedding entities and relations for learning and inference in knowledge bases. In ICLR, pp. 1–12.
  • [47] F. Yang, Z. Yang, and W. W. Cohen (2017) Differentiable learning of logical rules for knowledge base reasoning. In NIPS, pp. 2319–2328.
  • [48] Z. Yang (2020) Biomedical information retrieval incorporating knowledge graph for explainable precision medicine. In SIGIR, J. Huang, Y. Chang, X. Cheng, J. Kamps, V. Murdock, J. Wen, and Y. Liu (Eds.), pp. 2486.
  • [49] J. Yu, H. Yin, J. Li, M. Gao, Z. Huang, and L. Cui (2020) Enhance social recommendation with adversarial graph convolutional networks. TKDE.
  • [50] J. Zeng and P. Xie (2021) Contrastive self-supervised learning for graph classification. In AAAI, pp. 10824–10832.
  • [51] Z. Zhang, J. Cai, Y. Zhang, and J. Wang (2020) Learning hierarchy-aware knowledge graph embeddings for link prediction. In AAAI, pp. 3065–3072.
  • [52] K. Zhao, Y. Zhang, H. Yin, J. Wang, K. Zheng, X. Zhou, and C. Xing (2020) Discovering subsequence patterns for next POI recommendation. In IJCAI, pp. 3216–3222.
  • [53] M. Zhou, Z. Li, and P. Xie (2021) Self-supervised regularization for text classification. Trans. Assoc. Comput. Linguistics 9, pp. 641–656.