Heterogeneous Relational Reasoning in Knowledge Graphs with Reinforcement Learning

03/12/2020 · by Mandana Saebi, et al. · University of Notre Dame

Path-based relational reasoning over knowledge graphs has become increasingly popular due to a variety of downstream applications such as question answering in dialogue systems, fact prediction, and recommender systems. In recent years, reinforcement learning (RL) has provided solutions that are more interpretable and explainable than other deep learning models. However, these solutions still face several challenges, including a large action space for the RL agent and accurate representation of entity neighborhood structure. We address these problems by introducing a type-enhanced RL agent that uses the local neighborhood information for efficient path-based reasoning over knowledge graphs. Our solution uses a graph neural network (GNN) to encode the neighborhood information and utilizes entity types to prune the action space. Experiments on real-world datasets show that our method outperforms state-of-the-art RL methods and discovers more novel paths during the training procedure.


1 Introduction

Relational reasoning has long been one of the most desirable goals of machine learning and artificial intelligence [10, 16, 8, 32]. In the context of large-scale knowledge graphs (KGs), relational reasoning addresses a number of important applications, such as question answering [30, 3], dialogue systems [20, 18], and recommender systems [38, 15, 2]. Most KGs are incomplete, so the problem of inferring missing relations, or KG completion, has become an increasingly popular research domain. Several works view this as a link prediction problem and attempt to solve it using network embedding and deep learning approaches [1, 26, 27, 7, 33, 4]. These methods embed the KG into a vector space and use a similarity measure to identify the entities that are likely to be connected. However, they are unable to discover the reasoning paths, which are important for interpreting the model. Furthermore, they do not provide an explicit explanation during the learning process and often rely on other analytical methods to interpret their predictions. As a result, it is often hard to trust the predictions made by embedding-based methods. Recent advances in the area of deep reinforcement learning (DRL) have inspired reinforcement learning (RL) based solutions for the KG completion problem [21, 3, 30, 13, 19, 22, 14, 29]. RL-based methods formulate the task of KG completion as a sequential decision-making process in which the goal is to train an RL agent to walk over the graph by taking a sequence of actions (i.e., choosing the next entity) that connects the source to the target entity. The sequences of entities and relations can be directly used as a logical reasoning path for interpreting model predictions. For example, in order to answer the query (Reggie Miller, plays sport, ?), the agent may find the following reasoning path in the KG: Reggie Miller →(competes with)→ Michael Jordan →(plays sport)→ Basketball. In this case Reggie Miller, Michael Jordan, and Basketball are all nodes (entities) in the KG, and competes with and plays sport are edges (relations). The agent thus learns to navigate the nodes and edges of the KG.

These RL solutions demonstrate competitive accuracy with other deep learning methods while boasting improved interpretability of the reasoning process. However, there are some fundamental and open issues that we address in this work:

Large action space. In KGs, facts are represented as binary relations between entities. Real-world KGs contain huge numbers of entities and relations. As a result, the RL agent often encounters nodes with a high out-degree, which increases the complexity of choosing the next action. In these cases, exploring the possible paths to determine the optimal action is computationally expensive, and in many cases beyond the memory limits of a single GPU. Previous studies have shown that type information can improve KG completion performance [24, 11] using deep learning approaches. To improve the search efficiency, we first propose a representation for the entity type information, which we include in the representation of the state space. We then prune the action space based on the type information. This guides the RL agent to limit the search to the entities whose type best matches the previously taken actions and, as a result, avoid incorrect reasoning paths. In the above example reasoning path for the query (Reggie Miller, plays sport, ?), suppose the entity Michael Jordan has a high out-degree and is connected to several other entities through different relations (e.g., NBA Champion, New York City, Juanita Vanoy). None of these additional entities are useful for answering the query and may mislead the agent. However, we demonstrate that an agent can learn that the next entity's type is most likely a sport rather than a person or a location.

Accurate representation of entity neighborhood. Existing RL-based methods for KG completion do not capture the entity's neighborhood information. Previous studies on one-shot fact prediction have shown that the local neighborhood structure improves fact prediction performance for long-tailed relations [31, 36]. We propose a graph neural network (GNN) [9] to encode the neighborhood information of the entities and augment the state representation with the type and neighborhood information of the entities. We demonstrate that learning the local heterogeneous neighborhood information improves the performance of the RL agent on long-tailed relations, which in turn significantly improves performance on the KG completion task.

Our contributions include:

  1. Designing an efficient vector representation of entity type embeddings.

  2. Pruning the action space and improving the choice of next actions using the entity type information.

  3. Proposing a GNN for incorporating the local neighborhood information in the state representation.

The rest of the paper is organized as follows. First, in Section 2 we survey related work. Next, in Section 3 we present the details of our model, and in Section 4 present and discuss experimental results. Finally, we conclude in Section 5 and discuss opportunities for continued work.

2 Related Work

Relational reasoning over knowledge graphs has attracted significant attention over the past few years. Recent works [1, 26, 27, 7, 33, 4, 35] have approached this problem by embedding the relations and entities into a vector space and identifying related entities by their similarity in that space. However, these methods have some important drawbacks, including:

  1. They cannot perform multi-hop reasoning. That is, they only consider pairwise relationships and cannot reason along a path.

  2. They cannot explain the reasoning behind their predictions. Because they treat the task as a link prediction problem, the output of their prediction is binary.

With the recent success of deep reinforcement learning in AlphaGo [25], researchers began to adopt RL to solve a variety of problems that were conventionally addressed by deep learning methods, such as ad recommendation [38, 15, 2], dialogue systems [20, 18], and question answering [30, 3]. More recent methods proposed using RL to solve the multi-hop reasoning problem in knowledge graphs by framing it as a sequential decision-making process [3, 23, 30, 22, 12, 13]. DeepPath [30] was the first method to use RL to find relation paths between two entities in KGs: it walks from the source entity, chooses a relation, and transitions to an entity in the tail entity set of that relation. MINERVA [3], on the other hand, learns to do multi-hop reasoning by jointly selecting an entity-relation pair via a policy network. MARLPaR [12] uses a multi-agent RL approach in which two agents perform relation selection and entity selection iteratively. Lin et al. [13] implement reward shaping to address the problem of the sparse reward signal, and action dropout to reduce the effect of incorrect paths. Xian et al. [29] use KG reasoning for recommender systems and design both a multi-hop scoring function and a user-conditioned action pruning strategy to improve the efficiency of RL-based recommendation.

Because these RL models treat the KG completion problem as a path reasoning problem instead of a link prediction problem, they are able to overcome both drawbacks of embedding methods outlined above. However, the RL models have drawbacks of their own, the most notable of which are computational cost and predictive accuracy. Many of these RL methods have tried to combine the representational power of embeddings with the reasoning power of RL by training an agent to navigate an embedding space. For example, the authors of [13] build an agent-based model on top of pre-trained embeddings generated by ComplEx [27] or ConvE [4]. While we take a similar modular approach, our solution enriches the embedding space with additional information about entity types and local neighborhoods. In light of recent work on heterogeneous networks that has demonstrated the importance of heterogeneous information [5, 37, 24, 11] and local neighborhood information [31, 36] in graph mining, we take a broader approach: we include entity type information in the state representation to help the RL agent take more informed actions in a heterogeneous context, and we learn the heterogeneous neighborhood information simultaneously with training the RL agent to improve prediction on less frequent relations.

3 Model

In this section, we formally define the problem of relational reasoning in a KG and explain our RL solution. We then introduce our contributions: type embeddings and a heterogeneous neighbor encoder. An overview of the model is displayed in Figure 1.

Figure 1: Model overview. The type embeddings are first created by max/mean pooling on the entities with a similar type. The type embeddings are then concatenated with the entity embeddings to create the type-enhanced embeddings, which are then passed to the neighbor encoder to create the final entity representation.

3.1 Problem Formulation

Knowledge graphs consist of facts represented as triples. We formally define a knowledge graph G = (E, R), where E is a set of entities and R is a set of relations. Given a query (e_s, r_q, ?), e_s is called the source entity and r_q is the query relation. Our goal is to predict the target entity e_o. In most cases, the output of each query is a list of k candidate entities E_k, for some fixed k, ranked in descending order by probability. The prediction can be represented as a function f: (e_s, r_q) → E_k.
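To make the formulation concrete, the following minimal Python sketch (our own illustration; the names and the stub ranking are assumptions, not the authors' code) represents a KG as a set of triples and exposes the prediction task as a function returning the top-k candidates:

```python
from collections import defaultdict

# A KG G = (E, R) stored as a set of (head, relation, tail) triples.
triples = {
    ("Reggie Miller", "competes with", "Michael Jordan"),
    ("Michael Jordan", "plays sport", "Basketball"),
}

# Adjacency view used for traversal: entity -> list of (relation, neighbor).
neighbors = defaultdict(list)
for h, r, t in triples:
    neighbors[h].append((r, t))

def predict(e_s: str, r_q: str, k: int = 10) -> list:
    """Answer the query (e_s, r_q, ?) with up to k ranked candidates.
    A real model scores all entities; this stub just returns direct neighbors."""
    return [t for r, t in neighbors[e_s] if r == r_q][:k]

print(predict("Michael Jordan", "plays sport"))  # ['Basketball']
```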

In KGs, we are not only interested in accurate prediction of the target entity e_o, but also in understanding the reasoning path the model uses to predict it. This is a key advantage that RL methods offer over embedding-based models, which can make entity predictions but cannot give interpretable justification for them. Rather than treating the task as a form of link prediction, RL models instead train an agent to traverse the nodes of a KG via logical reasoning paths. Below we provide the details of the RL formulation of the problem.

3.2 A Reinforcement Learning Solution

Similar to [3, 13, 30], we formulate this problem as a Markov Decision Process (MDP), in which the goal is to train a policy gradient agent (using REINFORCE [28]) to learn an optimal reasoning path to answer a given query (e_s, r_q, ?). We express the RL framework as a set of states, actions, rewards, and transitions.

States. The state at time t is represented as the tuple s_t = (q, e_t, h_t), where q = (e_s, r_q) is the input query, e_t is the entity at which the agent is located at time t, and h_t is the history of the entities and relations traversed by the agent until time t. The agent begins at the source entity with initial state s_0 = (q, e_s, ∅). We refer to the terminal state as s_T = (q, e_T, h_T), where e_T is the agent's answer to the input query and h_T is the full reasoning path. Each entity and relation is represented by an embedding vector in R^d for some constant d. In our solution, we enrich the state representation with entity type and neighborhood information, as explained later in Sections 3.3 and 3.4.

Actions. At each time-step, the action is to select an edge (and move to the connecting entity) or stay at the current entity. The action space given state s_t is thus the set of all immediate neighbors of the current node e_t, and the node itself, i.e., A_t = N(e_t) ∪ {e_t}, where N(e_t) is the set of all neighbors of node e_t. The inclusion of the current node in the action space represents the agent's decision to terminate and select e_t as its answer to the input query. Since the graph is directed, N(e_t) only includes nodes adjacent on out-edges. Following previous work [1, 3, 30], for each edge (triple) (e_1, r, e_2), during preprocessing we add an inverse edge (e_2, r⁻¹, e_1) in order to facilitate graph traversal.
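A minimal sketch of this action-space construction, assuming a simple triple-set representation (the helper names are ours):

```python
from collections import defaultdict

def build_graph(triples):
    """Adjacency over out-edges; for each triple (e1, r, e2) an inverse edge
    (e2, r^-1, e1) is added so the agent can traverse edges in both directions."""
    out_edges = defaultdict(list)
    for h, r, t in triples:
        out_edges[h].append((r, t))
        out_edges[t].append((r + "_inverse", h))
    return out_edges

def action_space(out_edges, e_t):
    """A_t = N(e_t) plus the 'stay' self-action, whose selection terminates
    the episode with e_t as the answer."""
    return out_edges[e_t] + [("NO_OP", e_t)]
```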

Most real-world knowledge graphs are scale-free, meaning that a small percentage of entities have a high out-degree while the majority of the entities have a small out-degree. However, the entities with high out-degree are crucial to query answering. For performance reasons, many RL models are forced to cap the size of the action space and do so via a pre-computed heuristic. For example, [13] pre-computes PageRank scores for each node and narrows the action space to a fixed number of highest-ranking neighbors. In this work, we use entity type information to limit the search to the entities with best-matching types, given the previous actions. We call the reduced action space Ã_t. We provide more details in Section 3.3.

Rewards. The agent evaluates each action and chooses the one that will maximize a reward. Previous works [30, 3] define only a terminal reward of +1 if the agent reaches the correct answer. However, since knowledge graphs are incomplete, a binary reward cannot model the potentially missing facts. As a result, the agent receives low-quality rewards as it explores the environment. Inspired by [13], we use pre-trained KG embeddings to design a soft reward function for the terminal state based on [17]:

R(s_T) = R_b(s_T) + (1 - R_b(s_T)) f(e_s, r_q, e_T)    (1)

where R_b(s_T) is the binary terminal reward and f(e_s, r_q, e_T) is a similarity measure calculated from a pre-trained KG embedding approach [13]. We use different embedding methods for different datasets; more details are provided in Section 4.
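A sketch of this soft terminal reward, where `score_fn` stands in for a pre-trained embedding scorer mapped to [0, 1] (the function name and signature are our own):

```python
def soft_reward(e_s, r_q, e_T, kg_triples, score_fn):
    """Reward shaping in the spirit of Eq. (1): a hard +1 when the terminal
    entity is a known correct answer, otherwise a soft similarity score
    f(e_s, r_q, e_T) from a pre-trained KG embedding model."""
    if (e_s, r_q, e_T) in kg_triples:
        return 1.0
    return score_fn(e_s, r_q, e_T)  # assumed to return a value in [0, 1]
```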

Transitions. At state s_t, the agent chooses an action a_t based on a policy \pi_\theta : S \to A, where S and A are the sets of all possible states and actions, respectively. Following [13], we use an LSTM to encode the history of the past steps taken by the agent in solving the query. The history embedding h_t is represented as:

h_t = \mathrm{LSTM}(h_{t-1}, [r_{t-1}; e_t])    (2)

We define the policy network with weight parameters \theta = \{W_1, W_2\} as follows:

\pi_\theta(a_t \mid s_t) = \mathrm{softmax}\big(\mathbf{A}_t\, W_2\, \mathrm{ReLU}(W_1 [e_t; h_t; r_q])\big)    (3)

where \mathbf{A}_t stacks the embeddings of all actions in the (pruned) action space. The transition to a new state is thus given by:

\delta(s_t, a_t) = s_{t+1} = (q, e_{t+1}, h_{t+1})    (4)

To reduce the potential impact of spurious paths being overused, we utilize random action dropout as described in [13].

The policy network is trained using stochastic gradient descent to maximize the expected reward:

J(\theta) = \mathbb{E}_{(e_s, r_q, e_o) \in G}\, \mathbb{E}_{a_1, \dots, a_T \sim \pi_\theta}\big[ R(s_T) \big]    (5)
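The following PyTorch sketch shows one way to realize Eqs. (2)-(3) and the REINFORCE objective; layer shapes and names are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """An LSTM encodes the path history (Eq. 2); an MLP scores every candidate
    action embedding [r; e'] against the state [e_t; h_t; r_q] (Eq. 3)."""
    def __init__(self, dim):
        super().__init__()
        self.history = nn.LSTM(input_size=2 * dim, hidden_size=dim, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 2 * dim))

    def forward(self, e_t, r_q, path_steps, action_embs):
        # path_steps: (1, T, 2*dim), each step the concatenation [r_{t-1}; e_t]
        _, (h_n, _) = self.history(path_steps)
        h_t = h_n.squeeze(0).squeeze(0)            # (dim,)
        state = torch.cat([e_t, h_t, r_q])         # [e_t; h_t; r_q]
        scores = action_embs @ self.mlp(state)     # one logit per candidate action
        return torch.log_softmax(scores, dim=-1)   # log pi_theta(a | s_t)

# REINFORCE update for one rollout, with log-probs collected while sampling:
# loss = -reward * torch.stack(step_log_probs).sum(); loss.backward(); opt.step()
```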

3.3 Entity Types

Many knowledge graphs contain rich semantic information in the form of entity types, which can be used as prior knowledge to guide the agent through the reasoning process. We argue that type information can be helpful for reducing the action space, especially for nodes with a high out-degree. Entity type information can be used to limit the search to only the entities that best match the previously visited entities and actions. To achieve this, we measure the similarity of all possible actions given the entity type embeddings of the current and candidate target entities and keep only the top candidates. In order to build the entity type representation t_e, we propose to aggregate the vector representations of the entities with a similar type. Below we propose two simple mechanisms for doing so (a sketch follows the list):

  1. Take the average of the embedding vectors for all the entities that share the same entity type (mean-pooling).

  2. Take the maximum value of each element in the embedding vectors for all the entities that share the same entity type (max-pooling).
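Both pooling variants amount to a group-wise reduction over the pre-trained entity embeddings; a sketch under the assumption that each entity has a single type id (the function name is ours):

```python
import torch

def type_embeddings(entity_embs, type_of, num_types, mode="mean"):
    """Build one embedding per entity type by pooling the embeddings of all
    entities sharing that type. entity_embs: (num_entities, dim) tensor;
    type_of[i] is the type id of entity i. Assumes every type is non-empty."""
    out = torch.zeros(num_types, entity_embs.size(1))
    for t in range(num_types):
        members = entity_embs[[i for i, ti in enumerate(type_of) if ti == t]]
        out[t] = members.mean(0) if mode == "mean" else members.max(0).values
    return out
```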

To measure the similarity of the current entity with the candidate actions, we use the cosine similarity of two entities with respect to their type-enhanced embedding vectors:

\mathrm{sim}(\tilde{e}_t, \tilde{e}') = \frac{\tilde{e}_t \cdot \tilde{e}'}{\lVert \tilde{e}_t \rVert\, \lVert \tilde{e}' \rVert} + b_r    (6)

where \cdot is the dot product operation, [;] represents the vector concatenation operator, e and r are d-dimensional vector representations of the entity e and the relation r, and b_r is a bias term for r. We call \tilde{e} = [e; t_e] the type-enhanced entity representation, where t_e is the type embedding of e. We then rank the possible actions and prioritize the ones that are more likely to result in a correct answer based on this score. We thus create a pruned action space \tilde{A}_t by keeping the nodes with the highest values of sim.
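Pruning then reduces to scoring each candidate by Eq. (6) and keeping the top entries; the budget `keep` and the helper name below are illustrative assumptions:

```python
import torch.nn.functional as F

def prune_actions(e_t_tilde, candidates, keep=50):
    """Keep the top-`keep` candidate actions ranked by cosine similarity
    between type-enhanced embeddings. candidates: list of
    (action, type-enhanced embedding of the target entity)."""
    scored = [(F.cosine_similarity(e_t_tilde, emb, dim=0).item(), a)
              for a, emb in candidates]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [a for _, a in scored[:keep]]
```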

3.4 Heterogeneous Neighbor Encoder

After generating the type embeddings, we feed the type-enhanced embeddings together with the relation embeddings to the heterogeneous neighbor encoder to generate the enriched entity representation. Although many works [1, 34] have been proposed to learn entity embeddings using relational information, recent studies [31, 36] have demonstrated that explicitly encoding the local graph structure can benefit entity embedding learning in knowledge graphs. Inspired by this, we propose a heterogeneous neighbor encoder that learns the enriched entity embedding by aggregating the entity's neighbor information. Specifically, we denote the set of relational neighbors (relation, entity) of a given entity e as N_e = {(r_i, e_i) | (e, r_i, e_i) ∈ G}, where G is the background knowledge graph and r_i and e_i represent the i-th relation and corresponding neighboring entity of e, respectively. The heterogeneous neighbor encoder should be able to encode N_e and output a feature representation of e by considering its different relational neighbors (r_i, e_i). To achieve this goal, we formulate the enriched entity embedding f(e) as follows:

f(e) = \sigma\Big( \frac{1}{|N_e|} \sum_{(r_i, e_i) \in N_e} W (\tilde{e} \oplus r_i \oplus \tilde{e}_i) + b \Big)    (7)

where \sigma denotes the activation unit (we use Tanh), \oplus represents the concatenation operator, and \tilde{e}, \tilde{e}_i are the type-enhanced entity embeddings of e and e_i. Besides, W \in R^{d' \times 3d} and b \in R^{d'} (d: pre-trained embedding dimension) are the parameters of the neighbor encoder.
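A PyTorch sketch of Eq. (7); the exact input composition is our reading of the equation, not the authors' released code:

```python
import torch
import torch.nn as nn

class NeighborEncoder(nn.Module):
    """Project each (relation, neighbor) pair together with the entity's own
    type-enhanced embedding, average over the neighborhood, apply Tanh."""
    def __init__(self, dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(3 * dim, out_dim)  # W, b over [e~ ; r_i ; e_i~]

    def forward(self, self_emb, rel_embs, nbr_embs):
        # self_emb: (dim,); rel_embs, nbr_embs: (|N_e|, dim); entity embeddings
        # here are assumed to be type-enhanced already.
        expanded = self_emb.expand(rel_embs.size(0), -1)
        concat = torch.cat([expanded, rel_embs, nbr_embs], dim=-1)
        return torch.tanh(self.proj(concat).mean(dim=0))
```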

4 Experiments

In this section, we describe and discuss the experimental results of our proposed approach. We compare against several baseline methods: ConvE (embedding-based) [4], ComplEx (embedding-based) [27], DistMult (embedding-based) [33], MINERVA (agent-based) [3], and MultiHopKG (agent-based) using both ConvE and ComplEx for pre-trained embeddings [13].

4.1 Data & Metrics

Dataset           | #Entities | #Types | #Relations | #Facts  | #Queries | #Queries Discarded
NELL-995          | 75,492    | 268    | 200        | 154,213 | 3,992    | 1,152
Amazon Beauty     | 16,345    | 5      | 7          | 52,516  | 1,325    | 174
Amazon Cellphones | 13,837    | 5      | 7          | 31,034  | 951      | 205

Table 1: Description of the data sets used for our experiments. #Entities is the total number of nodes in the KG, #Types the number of entity types, #Relations the number of edge types, and #Facts the total number of edges. #Queries is the test set, a subset of the facts that are removed from the KG for testing. #Queries Discarded is the subset of queries for which at least one of the entities does not appear in the training set.

The experiments utilize the three data sets presented in Table 1. Of the standard data sets used in KG reasoning tasks, NELL-995 is the only one that explicitly encodes entity types. Therefore, in addition to NELL, we incorporated two datasets from the Amazon e-commerce collection [6]. Each Amazon dataset contains a set of users, products, brands, and other information, which the authors of [29] use to make product recommendations to users. That task is a specialized instance of KG completion that focuses only on user-product relations, so we do not include it in our baseline results. Additionally, we found these data sets were too large for efficient computation in the broader KG completion task, so we shrank them for our experiments: we randomly chose 20% of the nodes and induced a subgraph on those nodes. While this might result in a sparser graph that makes predictions more difficult, this was the best option given the lack of other relevant data containing type information.
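A sketch of this node-induced subsampling (the 20% fraction comes from the text; the helper itself is our illustration):

```python
import random

def induce_subgraph(triples, frac=0.2, seed=0):
    """Sample `frac` of the nodes uniformly at random and keep only the
    facts whose head and tail both survive."""
    rng = random.Random(seed)
    nodes = sorted({e for h, _, t in triples for e in (h, t)})
    kept = set(rng.sample(nodes, int(frac * len(nodes))))
    return [(h, r, t) for h, r, t in triples if h in kept and t in kept]
```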

The full KG G is represented by the number of facts in Table 1. Before training, we partition G into a training set and a test set, which we call Queries. In NELL-995, this split already exists as part of the standard data set. For the Amazon sets, we populated Queries with 10% of the triples in G, chosen at random. Each model is then trained on the set G ∖ Queries and tested on the set Queries. Recall that each fact is a triple of the form (e_s, r, e_o). Each triple is presented to the model in the form (e_s, r, ?), and, as described in Section 3.2, the model outputs a list of ranked candidate entities E_k. Also recall that we describe the prediction as a function f: (e_s, r_q) → E_k.

We measure performance for each experiment with standard KG completion metrics, namely Hits@k for k = {1, 3, 5, 10} and Mean Reciprocal Rank (MRR). Hits@k is measured as the percentage of test cases in which the correct entity e_o appears in the top k candidates E_k, i.e.:

\mathrm{Hits@}k = \frac{1}{|Q|} \sum_{(e_s, r_q, e_o) \in Q} \mathbb{1}\big[\mathrm{rank}(e_o, E_k) \le k\big]    (8)

where Q denotes the set of test queries, \mathbb{1}[\cdot] is the indicator function, and rank(e_o, E_k) is a function that returns the position of entity e_o in the set of ordered predictions E_k. MRR is a related metric, defined as the mean multiplicative inverse of the rank of the correct answer, i.e.:

\mathrm{MRR} = \frac{1}{|Q|} \sum_{(e_s, r_q, e_o) \in Q} \frac{1}{\mathrm{rank}(e_o, E_k)}    (9)
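Given the 1-based rank of the correct entity for each test query, both metrics are a few lines of Python (a sketch of Eqs. (8)-(9), not our evaluation harness):

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose correct entity ranks in the top k (Eq. 8)."""
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank over the queries (Eq. 9)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 12, 2]                    # toy example
print(hits_at_k(ranks, 10), mrr(ranks))  # 0.75 0.47916...
```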

Because none of these models generalize to unknown entities, following previous work [3, 13], we measure Hits@k and MRR only for queries in which both e_s and e_o have been seen at least once by the model during training. In other words, if either of the query entities is missing from the training set G ∖ Queries, we discard the query from testing. Additionally, we reserve a small portion of the Queries set as a development set to estimate performance during training.

4.2 Parameter Selection

For NELL-995, we utilize the same hyperparameters described in [13] when training the ConvE, ComplEx, DistMult, and Lin et al. [13] baselines. For MINERVA, we utilize the same hyperparameters described in [3] and train the model for 3,000 epochs. For the two Amazon datasets, we perform a grid search for our method and all baselines and report the best performance for each. For all datasets, we train the KG embedding models (ConvE and ComplEx) for 1,000 epochs each. These embeddings are used to make predictions directly but also serve as pre-trained inputs for the RL agent, which we train for 30 epochs per experiment for all datasets. We tried different embedding methods for the pre-trained embeddings and ultimately used ComplEx for NELL-995 and DistMult for Amazon Cellphones and Amazon Beauty, as they resulted in the best performance on each dataset.

Data Set          | NELL-995             | Amazon Beauty        | Amazon Cellphones
Metric (%)        | @1   @5   @10  MRR   | @1   @5   @10  MRR   | @1   @5   @10  MRR
ConvE             | 68.2 85.4 88.6 76.1  | 25.8 44.9 55.3 35.2  | 16.9 35.0 44.8 25.7
ComplEx           | 63.0 82.2 86.0 71.8  | 27.6 48.0 58.0 37.5  | 17.4 36.1 45.2 26.6
DistMult          | 65.1 83.5 85.7 73.4  | 26.7 48.8 58.1 37.1  | 18.8 39.7 48.4 28.8
Lin et al. (2018) | 65.6 80.4 84.4 72.7  | 20.6 33.6 39.5 27.1  | 12.2 22.2 27.6 17.5
MINERVA           | 59.8 79.5 82.1 68.9  | 17.5 29.3 38.2 24.3  |  6.8 12.3 22.7 11.6
Ours (-T)         | 66.9 81.9 85.2 74.1  | 21.2 34.6 40.5 27.9  | 12.6 22.9 28.1 17.9
Ours (-N)         | 67.1 81.8 85.1 73.1  | 20.7 33.8 39.6 27.2  | 12.3 22.4 27.9 17.6
Ours (TN)         | 68.9 83.2 86.7 74.8  | 21.8 35.1 40.7 28.2  | 12.9 23.1 28.5 18.2

Table 2: Experimental results on the NELL-995, Amazon Beauty, and Amazon Cellphones data sets. Hits@{1, 5, 10} and MRR are standard KG completion metrics described in Section 4.1. The methods are separated into embedding-based (ConvE, ComplEx, and DistMult) and agent-based (MINERVA, Lin et al., and our variants) groups.

4.3 Experimental Results

Our experimental results are presented in Table 2. For the NELL-995 data, we quote the results reported in [3, 13]. Embedding-based methods show overall higher performance than the RL-based methods. On all three datasets, our results outperform both RL baselines (Lin et al. [13] and MINERVA [3]). The Amazon datasets are far more challenging: even the embedding-based methods struggle, and the performance of all methods is significantly lower there. Our method yields a 4% improvement in MRR (and 5.43% in Hits@1) over the best RL baseline on Amazon Cellphones and a 3.9% improvement in MRR (and 5.5% in Hits@1) on Amazon Beauty. On the NELL-995 dataset, our method yields a 2.8% improvement in MRR and a 4% improvement in Hits@1 over the best-performing baseline. We also performed ablation studies to analyze the effect of each module in our model: we removed the type embeddings in Ours (-T) and the heterogeneous neighbor encoder in Ours (-N). Removing the heterogeneous neighbor encoder results in a larger performance drop on the Amazon datasets; this gap is much smaller on the NELL-995 data.

Our results show that pruning the action space based on the entity type information yields a larger performance boost on the Amazon datasets. We believe that, due to the sparsity of these two knowledge graphs, type information was more effective for action-space pruning than entity PageRank scores, as used in [13]. Note that there are only 5 entity types in the Amazon datasets. As a result, the number of entities discarded due to type mismatch is higher, which helps the agent discover better paths. We generated the type embeddings using max-pooling for the NELL-995 dataset and mean-pooling for both Amazon datasets.

4.4 Path diversity and convergence

We compare the number of unique paths discovered from the development set during the training procedure. Figure 2 shows that path diversity (top row) improves across all models as model performance (bottom row) improves. For this analysis, we compare our ablation models (Ours (-N) and Ours (-T)) with the best-performing RL baseline by Lin et al. [13]. Our method is more successful in discovering novel paths and obtains a better hit ratio on the development set. On the Amazon Beauty data, the number of unique paths discovered by Ours (-T) is higher than that of the full combined model (Ours (TN)), while on Amazon Cellphones the combined model performs better; similar to Amazon Beauty, Ours (-N) performs better than Ours (-T). NELL-995 shows a different trend, where removing the type information results in a larger drop in the number of unique paths than removing the heterogeneous neighbor encoder. This is intuitive, since NELL-995 contains far more entity types than the Amazon datasets, and the inclusion of type information may be a positive factor for discovering new paths. In terms of convergence, Amazon Beauty and Amazon Cellphones show a similar trend, where removing the type information significantly reduces the hit ratio. This gap is smaller for the NELL-995 data, though our model still shows an improvement in hit ratio on this dataset.

Figure 2: Performance of our model on the development set at different training epochs. The top row shows the number of unique paths visited during each epoch. The bottom row shows the hit ratio for the entire development set.

Data Set         | Seen Queries                                        | Unseen Queries
                 | %    Ours (TN) Ours (-T)  Ours (-N)  Lin et al.     | %    Ours (TN) Ours (-T)  Ours (-N)  Lin et al.
NELL-995         | 15.3 53.3      53.2 (0)   51.1 (-5)  51.4 (-4)      | 84.7 88.6      87.4 (-1)  86.2 (-3)  85.5 (-3)
Amazon-Beauty    | 89.6 25.5      23.8 (-7)  21.7 (-15) 23.9 (-6)      | 10.4 39.1      39.2 (0)   33.5 (-14) 36.6 (-7)
Amazon-Cellphone | 87.9 20.2      18.1 (-10) 15.6 (-23) 20.2 (-9)      | 12.1 25.3      25.3 (0)   23.1 (-8)  22.8 (-10)

Table 3: MRR on the three datasets for queries of the development set that are seen/unseen in the training set. The percentage of seen/unseen queries in the development set for each dataset is shown in the % columns.

Data Set         | To-Many                                             | To-One
                 | %    Ours (TN) Ours (-T)  Ours (-N)  Lin et al.     | %    Ours (TN) Ours (-T)  Ours (-N)  Lin et al.
NELL-995         | 12.9 57.4      56.9 (0)   55.2 (0)   55.7 (0)       | 87.1 82.8      82.0 (0)   81.0 (-1)  81.4 (-1)
Amazon-Beauty    | 89.6 25.5      23.8 (-7)  21.7 (-15) 24.2 (-5)      | 10.4 39.1      39.2 (0)   33.5 (-14) 36.8 (-6)
Amazon-Cellphone | 95.5 17.8      15.9 (-11) 13.6 (-24) 16.5 (-7)      | 4.5  83.4      83.4 (0)   66.6 (-20) 78.2 (-6)

Table 4: MRR on the three datasets for different relation types. The percentage of to-many and to-one relations in the development set for each dataset is shown in the % columns.

4.5 Performance on seen and unseen queries

We compare the ablation models along with the best RL baseline on seen and unseen queries. Note that the percentage of unseen queries is much lower in the Amazon datasets than in the NELL-995 dataset. Table 3 shows that our proposed method performs better on both seen and unseen queries. In particular, we notice that removing the neighbor encoder on Amazon Beauty and Amazon Cellphone results in a significant performance drop on unseen queries, while removing the type information has little or no effect. We observe a similar trend on the seen queries: although they show a performance drop after removing the type information, the effect is smaller than removing the neighbor encoder.

4.6 Performance on different relations

We evaluate our proposed model on different relation types and compare our results with the best-performing RL baseline. We take a similar approach to [13] to extract to-many and to-one relations: a relation r is considered to-many if queries containing r can have more than one correct answer; otherwise, it is considered to-one. Table 4 shows the MRR values on the development set for all three datasets. We notice that most relations in the Amazon datasets are to-many, as these graphs are denser. On the other hand, a large portion of the NELL-995 data consists of to-one relations. Overall, to-many relations show lower performance, regardless of the model. Our proposed model consistently performs better than Lin et al., except on the NELL-995 dataset, where the improvement is marginal. Both to-one and to-many relations are more sensitive to removing the neighbor encoder than to removing the type information. However, for to-one relations, MRR does not drop on any dataset after removing the type information.

4.7 Case study

In this section we present a few case studies that show the strength of our proposed method. In the NELL-995 dataset, our method is more successful when the agent encounters an entity with a high out-degree. As an example, for the query (Buffalo Bills (sports team), organization hired person, ?), our method discovers the path: Buffalo Bills (sports team) [D:26] → Mike Mularkey (coach) [D:5] → NFL (sports league) [D:315] → Dick Jauron (coach) [D:6], in which D denotes the entity out-degree. While other baselines also find the partial path Buffalo Bills (sports team) [D:26] → Mike Mularkey (coach) [D:5] → NFL (sports league) [D:315], they are not able to navigate properly after reaching the NFL entity, due to its high out-degree. As a result, they are unable to discover the answer Dick Jauron. As another example, we consider the query (New York (city), organization hired person, ?). Our method discovers the path: New York (city) [D:314] → Lincoln Center (attraction) [D:5] → NYC metropolitan area (island) [D:80] → Michael Bloomberg (politician) [D:4]. Again, other RL baselines struggle to find the next best step after the entity New York; our method uses the location information to find the answer Michael Bloomberg.

In the Amazon datasets, there are fewer entity and relation types. As a result, we observe many frequent patterns that all RL baselines are able to discover. Therefore, we focus on the diversity of the relations used in our method and the best-performing baseline [13] for the discovered paths in the development set. Figure 3 displays the inference results. On the Amazon Cellphones data, our method discovers fewer null, produced-by, and also-bought relations, while it utilizes other relations more, in particular belongs-to relations. Similarly, on the Amazon Beauty data, our method utilizes fewer also-bought, produced-by, and null relations, while it uses other relations more frequently, especially the bought-together relation. We believe one reason for the success of our method is this diverse use of relation types when discovering new path types.

Figure 3: Relation frequencies in the discovered paths of the development set for the Amazon datasets. Note the log scale on the y-axis.

5 Conclusion

We proposed a framework for improving the performance of path-based reasoning using reinforcement learning. Our results show that incorporating the heterogeneous context and the local neighborhood information yields better performance on the query answering task. Our analysis shows that the type information is important for faster convergence and finding more diverse paths, and that the neighborhood information improves performance on unseen queries. In the future, we plan to explore more efficient strategies for action-space pruning to improve the scalability of existing RL solutions. Furthermore, we plan to develop more effective type embeddings that consider the hierarchical structure of the type information.

References

  • [1] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pp. 2787–2795.
  • [2] M. Chen, A. Beutel, P. Covington, S. Jain, F. Belletti, and E. H. Chi (2019) Top-k off-policy correction for a REINFORCE recommender system. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 456–464.
  • [3] R. Das, S. Dhuliawala, M. Zaheer, L. Vilnis, I. Durugkar, A. Krishnamurthy, A. Smola, and A. McCallum (2017) Go for a walk and arrive at the answer: reasoning over paths in knowledge bases using reinforcement learning. arXiv preprint arXiv:1711.05851.
  • [4] T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel (2018) Convolutional 2D knowledge graph embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • [5] Y. Dong, N. V. Chawla, and A. Swami (2017) metapath2vec: scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 135–144.
  • [6] R. He and J. McAuley (2016) Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, pp. 507–517.
  • [7] Y. Jia, Y. Wang, X. Jin, H. Lin, and X. Cheng (2018) Knowledge graph embedding: a locally and temporally adaptive translation-based approach. ACM Transactions on the Web (TWEB) 12 (2), pp. 8.
  • [8] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda (2006) Learning systems of concepts with an infinite relational model. In AAAI, Vol. 3, pp. 5.
  • [9] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
  • [10] D. Koller, N. Friedman, S. Džeroski, C. Sutton, A. McCallum, A. Pfeffer, P. Abbeel, M. Wong, D. Heckerman, C. Meek, et al. (2007) Introduction to Statistical Relational Learning. MIT Press.
  • [11] K. Lei, J. Zhang, Y. Xie, D. Wen, D. Chen, M. Yang, and Y. Shen (2019) Path-based reasoning with constrained type attention for knowledge graph completion. Neural Computing and Applications, pp. 1–10.
  • [12] Z. Li, X. Jin, S. Guan, Y. Wang, and X. Cheng (2018) Path reasoning over knowledge graph: a multi-agent and reinforcement learning based method. In 2018 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 929–936.
  • [13] X. V. Lin, R. Socher, and C. Xiong (2018) Multi-hop knowledge graph reasoning with reward shaping. arXiv preprint arXiv:1808.10568.
  • [14] X. Lin, P. Subasic, and H. Yin (2019) Rel4KC: a reinforcement learning agent for knowledge graph completion and validation.
  • [15] F. Liu, R. Tang, X. Li, W. Zhang, Y. Ye, H. Chen, H. Guo, and Y. Zhang (2018) Deep reinforcement learning based recommendation with explicit user-item interactions modeling. arXiv preprint arXiv:1810.12027.
  • [16] S. Muggleton (1991) Inductive logic programming. New Generation Computing 8 (4), pp. 295–318.
  • [17] A. Y. Ng, D. Harada, and S. Russell (1999) Policy invariance under reward transformations: theory and application to reward shaping. In ICML, Vol. 99, pp. 278–287.
  • [18] B. Peng, X. Li, L. Li, J. Gao, A. Celikyilmaz, S. Lee, and K. Wong (2017) Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning. arXiv preprint arXiv:1704.03084.
  • [19] M. Qu, J. Tang, and J. Han (2018) Curriculum learning for heterogeneous star network embedding via deep reinforcement learning. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 468–476.
  • [20] I. V. Serban, C. Sankar, M. Germain, S. Zhang, Z. Lin, S. Subramanian, T. Kim, M. Pieper, S. Chandar, N. R. Ke, et al. (2017) A deep reinforcement learning chatbot. arXiv preprint arXiv:1709.02349.
  • [21] A. Sharma and K. D. Forbus (2010) Graph-based reasoning and reinforcement learning for improving Q/A performance in large knowledge-based systems. In 2010 AAAI Fall Symposium Series.
  • [22] Y. Shen, J. Chen, P. Huang, Y. Guo, and J. Gao (2018) M-Walk: learning to walk over graphs using Monte Carlo tree search. In Advances in Neural Information Processing Systems, pp. 6786–6797.
  • [23] Y. Shen, J. Chen, P. Huang, Y. Guo, and J. Gao (2018) ReinforceWalk: learning to walk in graph with Monte Carlo tree search.
  • [24] Y. Shen, N. Ding, H. Zheng, Y. Li, and M. Yang (2020) Modeling relation paths for knowledge graph completion. IEEE Transactions on Knowledge and Data Engineering.
  • [25] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484.
  • [26] R. Socher, D. Chen, C. D. Manning, and A. Ng (2013) Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pp. 926–934.
  • [27] T. Trouillon, J. Welbl, S. Riedel, É. Gaussier, and G. Bouchard (2016) Complex embeddings for simple link prediction. In International Conference on Machine Learning, pp. 2071–2080.
  • [28] R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (3-4), pp. 229–256.
  • [29] Y. Xian, Z. Fu, S. Muthukrishnan, G. de Melo, and Y. Zhang (2019) Reinforcement knowledge graph reasoning for explainable recommendation. arXiv preprint arXiv:1906.05237.
  • [30] W. Xiong, T. Hoang, and W. Y. Wang (2017) DeepPath: a reinforcement learning method for knowledge graph reasoning. arXiv preprint arXiv:1707.06690.
  • [31] W. Xiong, M. Yu, S. Chang, X. Guo, and W. Y. Wang (2018) One-shot relational learning for knowledge graphs. arXiv preprint arXiv:1808.09040.
  • [32] Z. Xu, V. Tresp, K. Yu, and H. Kriegel (2012) Infinite hidden relational models. arXiv preprint arXiv:1206.6864.
  • [33] B. Yang, W. Yih, X. He, J. Gao, and L. Deng (2014) Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.
  • [34] B. Yang, W. Yih, X. He, J. Gao, and L. Deng (2015) Embedding entities and relations for learning and inference in knowledge bases. In ICLR.
  • [35] H. Yao, C. Zhang, Y. Wei, M. Jiang, S. Wang, J. Huang, N. V. Chawla, and Z. Li (2019) Graph few-shot learning via knowledge transfer. arXiv preprint arXiv:1910.03053.
  • [36] C. Zhang, H. Yao, C. Huang, M. Jiang, Z. Li, and N. V. Chawla (2020) Few-shot knowledge graph completion. In AAAI.
  • [37] C. Zhang (2020) Learning from heterogeneous networks: methods and applications. In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 927–928.
  • [38] G. Zheng, F. Zhang, Z. Zheng, Y. Xiang, N. J. Yuan, X. Xie, and Z. Li (2018) DRN: a deep reinforcement learning framework for news recommendation. In Proceedings of the 2018 World Wide Web Conference, pp. 167–176.