Contextualized Graph Attention Network for Recommendation with Item Knowledge Graph

04/24/2020 · by Susen Yang, et al. · Nanyang Technological University · Agency for Science, Technology and Research

Graph neural networks (GNN) have recently been applied to exploit knowledge graph (KG) for recommendation. Existing GNN-based methods explicitly model the dependency between an entity and its local graph context in KG (i.e., the set of its first-order neighbors), but may not be effective in capturing its non-local graph context (i.e., the set of most related high-order neighbors). In this paper, we propose a novel recommendation framework, named Contextualized Graph Attention Network (CGAT), which can explicitly exploit both local and non-local graph context information of an entity in KG. Specifically, CGAT captures the local context information by a user-specific graph attention mechanism, considering a user's personalized preferences on entities. Moreover, CGAT employs a biased random walk sampling process to extract the non-local context of an entity, and utilizes a Recurrent Neural Network (RNN) to model the dependency between the entity and its non-local contextual entities. To capture the user's personalized preferences on items, an item-specific attention mechanism is also developed to model the dependency between a target item and the contextual items extracted from the user's historical behaviors. Experimental results on real datasets demonstrate the effectiveness of CGAT, compared with state-of-the-art KG-based recommendation methods.


1 Introduction

Personalized recommender systems have been widely applied in different application scenarios [8, 10, 9, 22]. The knowledge graph (KG), which encodes rich semantic relations between items, has recently been shown to be effective in improving recommendation performance [15]. Essentially, a KG is a heterogeneous network where nodes correspond to entities and edges correspond to relations. The main challenge of incorporating KG into recommendation is how to effectively exploit the relations between entities and the graph structure of KG. In practice, one group of methods imposes well-designed additive regularization loss terms to capture the KG structure [24, 3]. However, these methods cannot explicitly incorporate the semantic relation information of KG into the recommendation model. Another group of methods focuses on extracting the high-order connectivity information between entities along paths, which are often manually designed or selected based on special criteria [23, 25]. These approaches may rely heavily on domain knowledge. Recently, the rapid development of graph neural networks (GNN) [26] has motivated the application of graph convolutional networks (GCN) [5] and graph attention networks (GAT) [17] to end-to-end KG-based recommender systems [19, 21], which aggregate the context information from the structural neighbors of an entity in KG.

Figure 1: A simple example showing the user-item interactions and the item knowledge graph.

Although GNN-based recommendation methods can automatically capture both the structure and semantic information of KG, they may still have the following deficiencies. Firstly, most GNN-based methods do not model user-specific preferences on entities when aggregating the local graph context (i.e., the first-order neighbors) of an entity in KG. As shown in Figure 1, both users have interactions with the same item; however, they may prefer this item for different reasons. For example, one user prefers the item because of one of its attribute entities in KG, while the other pays more attention to a different attribute entity. Methods that ignore this situation are insufficient to model users' personalized preferences. Secondly, the non-local graph context (i.e., the set of most related high-order neighbors) of an entity in KG is not explicitly captured by existing GNN-based recommendation methods. In KG, some items may have very few neighbors, thus some important entities may not be directly connected to them. For example, in Figure 1, one item has only one entity linked with it, thus aggregating the local context information of that entity is not enough to represent the item. Moreover, we can also observe an entity that is connected with this item along many multi-hop paths, which demonstrates its importance to the item. Existing GNN-based methods [19, 21] address this limitation by propagating features layer by layer. However, this may weaken the effects of more distant entities, or even introduce noisy information.

To address these issues, we propose a novel recommendation framework, namely Contextualized Graph Attention Network (CGAT), which explicitly exploits both the local and non-local context of an entity in KG, as well as the item context extracted from users’ historical data. The contributions made in this paper are as follows: (1) We propose a user-specific graph attention mechanism to aggregate the local context information in KG for recommendation, based on the intuition that different users may have different preferences on the same entity in KG; (2) We propose to explicitly exploit the non-local context information in KG, by developing a biased random walk sampling process to extract the non-local context of an entity, and employing a recurrent neural network (RNN) to model the dependency between the entity and its non-local context in KG; (3) We develop an item-specific attention mechanism that exploits the context information extracted from a user’s historical behavior data to model her preferences on items; (4) We perform extensive experiments on real datasets to demonstrate the effectiveness of CGAT. Experimental results indicate that CGAT usually outperforms state-of-the-art KG-based recommendation methods.

2 Related Work

KG-based recommendation methods can be categorized into three main groups: regularization-based methods, path-based methods, and GNN-based methods. The regularization-based methods exploit the KG structure by imposing regularization terms on the loss function used to learn entity embeddings. For example, CKE [24] is a representative method, which uses TransR [7] to derive semantic entity representations from the item KG. The KTUP model [3] jointly trains the personalized recommendation and KG completion tasks by sharing the item embeddings. The high-order feature interactions between items and entities can be further approximated by a cross&compress unit [20]. These methods are highly flexible; however, they lack an explicit modeling of the semantic relations in KG. The path-based methods exploit various connection patterns between entities. For example, the recent works [23, 13] estimate meta-path based similarities for recommendation. In [25], matrix factorization and factorization machine techniques are integrated to assemble different meta-path information. To address the limitation of manually designed meta-paths, different selection rules or propagation methods have been proposed [18]. For example, in [16], a length condition is used to extract paths, and a batch of RNNs is then applied to aggregate the path information. Besides the length condition, multi-hop relational paths can also be induced based on item associations [11]. Recently, the GNN-based methods aim to develop end-to-end KG-based recommender systems. For example, the KGNN-LS model [19] employs a trainable function that calculates the relation weights for each user, transforming the KG into a user-specific weighted graph, and then applies GCN on this graph to learn item embeddings. In [21], the graph attention mechanism is adopted to aggregate and propagate the local neighborhood information of an entity, without considering users' personalized preferences on entities. In summary, these GNN-based methods implicitly aggregate high-order neighborhood information via layer-by-layer propagation, instead of explicitly modeling the dependency between an entity and its high-order neighbors.

Figure 2: (a) The framework of CGAT: from left to right, it exploits the KG context and the interaction graph context to predict a user's preference score on a candidate item; (b) exploiting the local graph context by applying a user-specific graph attention mechanism to KG; (c) exploiting the non-local graph context by biased random walk sampling (BRWS) and a GRU module; (d) exploiting the interaction graph context by applying an item-specific attention mechanism to the user's historical items.

3 Contextualized Graph Attention Network

We assume the item KG $\mathcal{G} = (\mathcal{E}, \mathcal{R}, \mathcal{T})$ is available, where $\mathcal{E}$ denotes the set of entities, $\mathcal{R}$ denotes the set of relations, and $\mathcal{T} = \{(h, r, t) \mid h, t \in \mathcal{E}, r \in \mathcal{R}\}$ denotes the set of entity-relation-entity triples describing the KG structure. Here $h$, $r$, and $t$ denote the head entity, relation, and tail entity of a knowledge triple, respectively. $\mathbf{e} \in \mathbb{R}^d$ and $\mathbf{r} \in \mathbb{R}^d$ are used to denote the embeddings of the entity $e$ and relation $r$, where $d$ denotes the dimensionality of the latent space. Note that the items are treated as a special type of entities in the KG. In addition, we denote the set of users by $\mathcal{U}$, the set of items by $\mathcal{V}$, and all the observed user-item interactions by $\mathcal{O} = \{(u, v) \mid u \in \mathcal{U}, v \in \mathcal{V}\}$. For each user $u$, we denote the set of items she has interacted with by $\mathcal{V}_u$, and use $\mathbf{u} \in \mathbb{R}^d$ to denote her embedding. Figure 2 shows the structural details of the proposed CGAT model.

3.1 Exploiting Knowledge Graph Context

CGAT exploits KG context from two aspects: (a) local context information, and (b) non-local context information.

3.1.1 Local Graph Context

The entity corresponding to an item is usually linked with many other entities that can enrich its information in KG. To consider users' personalized preferences on entities, we develop a user-specific graph attention mechanism to aggregate the neighborhood information of an entity in KG. For different users, we compute different attention scores for the same neighborhood entity. The embeddings of neighborhood entities can then be aggregated based on the user-specific attention scores. Here, we denote the local neighbors of an entity $h$ by $\mathcal{N}_h = \{e_t \mid (h, r, e_t) \in \mathcal{T}\}$, and define $\mathcal{N}_h$ as the local graph context of $h$ in KG. Moreover, we also argue that the neighborhood entities may have different impacts if they are connected via different relations. To incorporate relations into the attention mechanism, we first integrate the embedding $\mathbf{e}_t$ of a neighborhood entity $e_t$ and the embedding $\mathbf{r}$ of the corresponding relation $r$ by the following linear transformation,

$$\hat{\mathbf{e}}_t = \mathbf{W}_1 (\mathbf{e}_t \,\|\, \mathbf{r}), \tag{1}$$

where $\|$ is the concatenation operation and $\mathbf{W}_1 \in \mathbb{R}^{d \times 2d}$ is the weight matrix. The user-specific attention score that describes the importance of the entity $e_t$ to the entity $h$, for a target user $u$, is defined as follows,

$$\alpha(u, h, r, e_t) = \frac{\exp\big(f(\mathbf{u}, \mathbf{h}, \hat{\mathbf{e}}_t)\big)}{\sum_{e_{t'} \in \mathcal{N}_h} \exp\big(f(\mathbf{u}, \mathbf{h}, \hat{\mathbf{e}}_{t'})\big)}. \tag{2}$$

The operation $f(\cdot)$ is performed by a single-layer feed-forward neural network, which is defined as follows,

$$f(\mathbf{u}, \mathbf{h}, \hat{\mathbf{e}}_t) = \mathbf{w}_0^\top \sigma\big(\mathbf{W}_2 (\mathbf{u} \,\|\, \mathbf{h} \,\|\, \hat{\mathbf{e}}_t) + \mathbf{b}_2\big) + b_0, \tag{3}$$

where $\sigma(\cdot)$ is a non-linear transformation defined as $\sigma(\mathbf{x}) = \mathrm{LeakyReLU}(\mathbf{x})$. Here, $\mathbf{W}_2$, $\mathbf{w}_0$, $\mathbf{b}_2$, and $b_0$ are the weight matrices and bias vectors, respectively. Given the coefficient of each neighboring entity of $h$, we compute the linear combination of their embeddings to obtain the local neighborhood embedding of $h$ as follows,

$$\mathbf{h}_{\mathcal{N}} = \sum_{e_t \in \mathcal{N}_h} \alpha(u, h, r, e_t)\, \hat{\mathbf{e}}_t. \tag{4}$$

Then, we aggregate the embedding $\mathbf{h}$ of entity $h$ and its local neighborhood embedding $\mathbf{h}_{\mathcal{N}}$ to form a local contextual embedding for $h$ as follows,

$$\mathbf{c}_h^{L} = \sigma\big(\mathbf{W}_a (\mathbf{h} \,\|\, \mathbf{h}_{\mathcal{N}}) + \mathbf{b}_a\big), \tag{5}$$

where $\mathbf{W}_a$ and $\mathbf{b}_a$ are the weight matrix and bias vector of the aggregator.
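To make the aggregation concrete, the following PyTorch sketch implements Eqs. (1)-(5) as reconstructed above, for a mini-batch of (user, entity) pairs with a fixed number of sampled neighbors. The module and tensor names (UserSpecificGraphAttention, e_nbr, r_nbr) and the LeakyReLU activations are our own conventions, not details confirmed by the original implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UserSpecificGraphAttention(nn.Module):
    """Minimal sketch of the local context aggregation in Eqs. (1)-(5)."""

    def __init__(self, d: int):
        super().__init__()
        self.w1 = nn.Linear(2 * d, d, bias=False)   # Eq. (1): fuse neighbor and relation
        self.ffn = nn.Linear(3 * d, d)              # Eq. (3): single-layer FFN
        self.ffn_out = nn.Linear(d, 1)
        self.agg = nn.Linear(2 * d, d)              # Eq. (5): aggregator (W_a, b_a)

    def forward(self, u, h, e_nbr, r_nbr):
        # u, h: (B, d) user / target-entity embeddings;
        # e_nbr, r_nbr: (B, K_e, d) embeddings of sampled neighbors and relations.
        B, K, d = e_nbr.shape
        e_hat = self.w1(torch.cat([e_nbr, r_nbr], dim=-1))               # Eq. (1)
        uh = torch.cat([u, h], dim=-1).unsqueeze(1).expand(B, K, 2 * d)
        scores = self.ffn_out(F.leaky_relu(self.ffn(torch.cat([uh, e_hat], dim=-1))))
        alpha = F.softmax(scores, dim=1)                                 # Eq. (2)
        h_nbr = (alpha * e_hat).sum(dim=1)                               # Eq. (4)
        return F.leaky_relu(self.agg(torch.cat([h, h_nbr], dim=-1)))    # Eq. (5)
```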

3.1.2 Non-Local Graph Context

The user-specific graph attention network explicitly aggregates the local (one-hop) neighborhood information of a target entity to enrich its representation. However, this is not enough to capture the non-local context of an entity in KG, and it also has weak representation ability for nodes with few connections in KG. To fill this gap, we propose a biased random walk based GRU module to aggregate the non-local context information of entities.

The biased random walk sampling (BRWS) procedure is used to extract the non-local context of a target entity $h$. To achieve a wider depth-first search, we repeat the biased random walk from $h$ to obtain $M$ paths, each of which has a fixed length $L$. The walk iteratively travels from the current entity to one of its neighbors with a probability defined as follows,

$$p(c_{i+1} = e \mid c_i) \propto \begin{cases} \beta, & \text{if } d(e, c_0) \le d(c_i, c_0), \\ 1 - \beta, & \text{if } d(e, c_0) > d(c_i, c_0), \end{cases} \quad e \in \mathcal{N}_{c_i}, \tag{6}$$

where $c_i$ is the $i$-th entity of a path, $c_0$ denotes the root entity $h$, and $d(\cdot, \cdot)$ measures the distance between two entities in KG. To encourage a wider search, we empirically set $\beta < 0.5$, so that the walk prefers entities farther away from the root. After obtaining the paths, we sort the visited entities according to their frequency in the walks in descending order, and choose the set of top-ranked entities in order. These entities are defined as the non-local graph context of the entity $h$ in KG, denoted by $\mathcal{C}_h$. In the experiments, we empirically fix the size of $\mathcal{C}_h$ to $K_g$, and set the parameters $\beta$, $M$, and $L$ to 0.2, 15, and 8, respectively.
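The sampling step is straightforward to sketch in Python. The snippet below is one possible BRWS implementation over an adjacency-list KG; since computing exact distances for the bias in Eq. (6) is expensive, it approximates "moving back toward the root" by revisiting entities already on the current path, which is an assumption of this sketch. The defaults for beta, num_paths, and path_len follow the settings reported above.

```python
import random
from collections import Counter

def brws(adj, root, top_k, beta=0.2, num_paths=15, path_len=8):
    """Biased random walk sampling (BRWS), a simplified sketch.

    adj: dict mapping an entity id to the list of its neighbor ids.
    top_k: size of the non-local context (K_g in the text).
    """
    counts = Counter()
    for _ in range(num_paths):
        cur, visited = root, {root}
        for _ in range(path_len):
            nbrs = adj.get(cur, [])
            if not nbrs:
                break
            back = [e for e in nbrs if e in visited]      # toward the root (approx.)
            away = [e for e in nbrs if e not in visited]  # farther from the root
            if away and (not back or random.random() > beta):
                cur = random.choice(away)   # preferred: widen the search
            else:
                cur = random.choice(back)   # with probability ~beta: step back
            visited.add(cur)
            if cur != root:
                counts[cur] += 1
    # the top_k most frequently visited entities form the non-local context C_h
    return [e for e, _ in counts.most_common(top_k)]
```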

In this work, we employ a GRU to model the dependency between an entity $h$ and its non-local context $\mathcal{C}_h$, because the GRU performs well on sequence data ($\mathcal{C}_h$ can be seen as a frequency-ordered sequence). Indeed, the more frequently an entity appears in the random walks, the more important it is to the target entity $h$. Based on this intuition, we input $\mathcal{C}_h$ into the GRU in reverse order, and use the output of the last step as the embedding of $\mathcal{C}_h$, which is denoted by,

$$\mathbf{h}_{G} = \mathrm{GRU}\big(\mathrm{rev}(\mathcal{C}_h)\big), \tag{7}$$

where $\mathrm{rev}(\mathcal{C}_h)$ denotes the reversed sequence of $\mathcal{C}_h$. Then, we aggregate $\mathbf{h}$ and $\mathbf{h}_{G}$ to form the non-local contextual embedding for $h$ as follows,

$$\mathbf{c}_h^{G} = \sigma\big(\mathbf{W}_a (\mathbf{h} \,\|\, \mathbf{h}_{G}) + \mathbf{b}_a\big). \tag{8}$$

Here, we use the same aggregator parameters $\mathbf{W}_a$ and $\mathbf{b}_a$ as in Eq. (5). Given the embeddings $\mathbf{c}_h^{L}$ and $\mathbf{c}_h^{G}$ of the local and non-local context of $h$ in KG, we apply a gate mechanism to integrate the two embeddings by learning a weight for each dimension as,

$$\mathbf{c}_h = \mathbf{g} \odot \mathbf{c}_h^{L} + (\mathbf{1} - \mathbf{g}) \odot \mathbf{c}_h^{G}, \qquad \mathbf{g} = \delta(\boldsymbol{\omega}), \tag{9}$$

where $\boldsymbol{\omega}$ is a learnable vector and $\delta(\cdot)$ denotes the sigmoid function. As items are a special type of entities in KG, we can use Eq. (9) to compute the context embedding $\mathbf{c}_v$ of an item $v$, considering its local and non-local context in KG. Then, we concatenate the item embedding $\mathbf{v}$ and $\mathbf{c}_v$ to obtain the contextualized representation of the item as $\tilde{\mathbf{v}} = \mathbf{v} \,\|\, \mathbf{c}_v$.
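The following PyTorch sketch assembles Eqs. (7)-(9), assuming (as in our reconstruction) that the gate is a freely learnable per-dimension vector passed through the sigmoid; the class and argument names are ours.

```python
import torch
import torch.nn as nn

class NonLocalContext(nn.Module):
    """Non-local context encoding and gated fusion (Eqs. (7)-(9)), a sketch."""

    def __init__(self, d: int):
        super().__init__()
        self.gru = nn.GRU(input_size=d, hidden_size=d, batch_first=True)
        self.agg = nn.Linear(2 * d, d)            # shares W_a, b_a with Eq. (5)
        self.gate = nn.Parameter(torch.zeros(d))  # learnable per-dimension gate

    def forward(self, h, ctx_emb, c_local):
        # h: (B, d); ctx_emb: (B, K_g, d) non-local context embeddings already
        # ordered by descending frequency; c_local: (B, d) output of Eq. (5).
        rev = torch.flip(ctx_emb, dims=[1])               # feed in reverse order
        _, h_g = self.gru(rev)                            # Eq. (7): last-step state
        h_g = h_g.squeeze(0)                              # (B, d)
        c_nonlocal = torch.relu(self.agg(torch.cat([h, h_g], dim=-1)))  # Eq. (8)
        g = torch.sigmoid(self.gate)                      # Eq. (9): gate weights
        return g * c_local + (1 - g) * c_nonlocal
```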

3.2 Exploiting Interaction Graph Context

In practice, a user's historical items are usually used to describe her potential interests [14]. For example, the classical SVD++ model [6] treats a user $u$'s historical items as the implicit feedback given by $u$, and models their influences on a target item for recommendation. In a similar spirit, we define $\mathcal{V}_u$ as the interaction graph context of user $u$. We then develop an item-specific attention mechanism to model the influences of $\mathcal{V}_u$ on a target item $v$. The basic assumption is that a user's historical items may have different importance in estimating her preferences on different candidate items. For each item $i \in \mathcal{V}_u$, its relevance weight with respect to the target item $v$ is defined as,

$$\eta_{iv} = \mathbf{w}_4^\top (\tilde{\mathbf{v}}_i \,\|\, \tilde{\mathbf{v}}_v) + b_4, \tag{10}$$

where $\mathbf{w}_4$ is a weight vector, $b_4$ is the bias, and $\tilde{\mathbf{v}}_i$ and $\tilde{\mathbf{v}}_v$ are the contextualized representations of items $i$ and $v$. Then, we define the embedding of the graph context $\mathcal{V}_u$, with respect to a target item $v$, as follows,

$$\mathbf{c}_{u,v} = \sum_{i \in \mathcal{V}_u} \frac{\exp(\eta_{iv})}{\sum_{j \in \mathcal{V}_u} \exp(\eta_{jv})}\, \tilde{\mathbf{v}}_i. \tag{11}$$

A non-linear transformation with ReLU as the activation function is then used to aggregate $\mathbf{u}$ and $\mathbf{c}_{u,v}$ and form the contextual embedding for $u$ as follows,

$$\mathbf{c}_u = \mathrm{ReLU}\big(\mathbf{W}_5 (\mathbf{u} \,\|\, \mathbf{c}_{u,v}) + \mathbf{b}_5\big), \tag{12}$$

where $\mathbf{W}_5$ and $\mathbf{b}_5$ are the weight matrix and bias vector. We concatenate $\mathbf{u}$ and $\mathbf{c}_u$ to form the contextualized representation of $u$ as $\tilde{\mathbf{u}} = \mathbf{u} \,\|\, \mathbf{c}_u$. The prediction of $u$'s preference on $v$ can then be defined as $\hat{y}_{uv} = \tilde{\mathbf{u}}^\top \tilde{\mathbf{v}}_v$.
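A minimal PyTorch sketch of the item-specific attention and the final scoring (Eqs. (10)-(12)) is given below; it assumes contextualized item representations of size 2d produced by the concatenation described above, and all module names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ItemSpecificAttention(nn.Module):
    """Interaction graph context and scoring (Eqs. (10)-(12)), a sketch."""

    def __init__(self, d: int):
        super().__init__()
        self.att = nn.Linear(4 * d, 1)   # Eq. (10): weight vector w_4 and bias b_4
        self.agg = nn.Linear(3 * d, d)   # Eq. (12): aggregate u and the context

    def forward(self, u, v_tilde, hist_tilde):
        # u: (B, d); v_tilde: (B, 2d) contextualized target item;
        # hist_tilde: (B, K_v, 2d) contextualized historical items.
        B, K, _ = hist_tilde.shape
        v_exp = v_tilde.unsqueeze(1).expand(B, K, v_tilde.size(-1))
        eta = self.att(torch.cat([hist_tilde, v_exp], dim=-1))       # Eq. (10)
        w = F.softmax(eta, dim=1)
        c_uv = (w * hist_tilde).sum(dim=1)                           # Eq. (11)
        c_u = F.relu(self.agg(torch.cat([u, c_uv], dim=-1)))         # Eq. (12)
        u_tilde = torch.cat([u, c_u], dim=-1)                        # (B, 2d)
        return (u_tilde * v_tilde).sum(dim=-1)                       # inner product
```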

Input: Observed interactions $\mathcal{O}$, knowledge graph $\mathcal{G}$
Output: Score function $\hat{y}_{uv}$
1:  Randomly initialize all parameters $\Theta$
2:  Construct the sets $\mathcal{D}_{\mathcal{O}}$ and $\mathcal{D}_{\mathcal{T}}$ based on $\mathcal{O}$ and $\mathcal{T}$;
3:  for each training iteration do
4:     Sample a batch of tuples $\mathcal{B}_{\mathcal{O}}$ from $\mathcal{D}_{\mathcal{O}}$;
5:     Sample a batch of tuples $\mathcal{B}_{\mathcal{T}}$ from $\mathcal{D}_{\mathcal{T}}$;
6:     Compute the gradients of Eq. (16) with respect to $\Theta$ by back-propagation, based on the tuples in $\mathcal{B}_{\mathcal{O}}$ and $\mathcal{B}_{\mathcal{T}}$;
7:     Update $\Theta$ by a gradient descent algorithm (i.e., Adam) with learning rate $\lambda$;
8:  end for
9:  return $\hat{y}_{uv}$
Algorithm 1: CGAT Optimization Algorithm

3.3 Learning Algorithm

The Bayesian personalized ranking (BPR) optimization criterion [12] is used to learn the model parameters of CGAT. BPR assumes that, for each user, the interacted items should have higher ranking scores than the un-interacted items. Here, we define the BPR loss as follows,

$$\mathcal{L}_{BPR} = \sum_{(u, v^+, v^-) \in \mathcal{D}_{\mathcal{O}}} -\ln \delta\big(\hat{y}_{uv^+} - \hat{y}_{uv^-}\big), \tag{13}$$

where $\mathcal{D}_{\mathcal{O}} = \{(u, v^+, v^-) \mid (u, v^+) \in \mathcal{O}, (u, v^-) \notin \mathcal{O}\}$ is constructed by negative sampling. Empirically, for each $(u, v^+) \in \mathcal{O}$, we randomly sample negative items from $\mathcal{V} \setminus \mathcal{V}_u$ in the experiments. As we also need to learn the embeddings of entities and relations in KG, we design a regularization loss based on the KG structure. Specifically, for each triple $(h, r, t) \in \mathcal{T}$, we first define the following score to describe the distance between the head entity $h$ and the tail entity $t$ via relation $r$ in the latent space,

$$s(h, r, t) = \|\mathbf{h} + \mathbf{r} - \mathbf{t}\|_2^2. \tag{14}$$

Then, we define the regularization loss as follows,

$$\mathcal{L}_{KG} = \sum_{(h, r, t, t^-) \in \mathcal{D}_{\mathcal{T}}} -\ln \delta\big(s(h, r, t^-) - s(h, r, t)\big), \tag{15}$$

where $\mathcal{D}_{\mathcal{T}} = \{(h, r, t, t^-)\}$ is constructed by randomly sampling an entity $t^-$ from $\mathcal{E}$ for each $(h, r, t) \in \mathcal{T}$. The motivation is that, in the latent space, the distance between an entity $h$ and its directly connected neighbor $t$ should be smaller than the distance between $h$ and an entity $t^-$ that is not directly connected to $h$ via relation $r$. Then, the model parameters can be learned by solving the following objective function,

$$\min_{\Theta}\ \mathcal{L} = \mathcal{L}_{BPR} + \mu\, \mathcal{L}_{KG} + \gamma \|\Theta\|_2^2, \tag{16}$$

where $\Theta$ denotes all the parameters of CGAT, and $\mu$ and $\gamma$ are the regularization parameters. The problem in Eq. (16) is solved by a gradient descent algorithm. The details of the optimization procedure are summarized in Algorithm 1.
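The three terms of Eq. (16) are simple to express in PyTorch. The sketch below follows our reconstruction of Eqs. (13)-(16); the squared L2 distance in Eq. (14) and the default values of mu and gamma are assumptions, not settings taken from the paper.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    # Eq. (13): -ln sigmoid(y_uv+ - y_uv-), averaged over the batch
    return -F.logsigmoid(pos_scores - neg_scores).mean()

def kg_reg_loss(h, r, t, t_neg):
    # Eq. (14): TransE-style squared distance s(h, r, t) = ||h + r - t||^2
    s_pos = ((h + r - t) ** 2).sum(dim=-1)
    s_neg = ((h + r - t_neg) ** 2).sum(dim=-1)
    # Eq. (15): true tails should be closer than randomly sampled tails
    return -F.logsigmoid(s_neg - s_pos).mean()

def total_loss(pos_scores, neg_scores, h, r, t, t_neg, params,
               mu=1e-4, gamma=1e-5):   # mu, gamma: placeholder values
    # Eq. (16): BPR loss + KG structure regularization + L2 regularization
    l2 = sum((p ** 2).sum() for p in params)
    return (bpr_loss(pos_scores, neg_scores)
            + mu * kg_reg_loss(h, r, t, t_neg)
            + gamma * l2)
```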

In the implementation of CGAT, we randomly sample $K_e$ neighbors from $\mathcal{N}_h$ for a target entity $h$, and $K_v$ historical items from $\mathcal{V}_u$ for a target user $u$, to compute the attention weights defined in Eq. (2) and Eq. (10), respectively. This trick helps keep the computational pattern of each mini-batch fixed and improves the computation efficiency. Moreover, we also fix the size of the non-local context to $K_g$. In model training, $K_e$ and $K_v$ are fixed. Let $B$ denote the number of sampled user-item interactions in each batch. The time complexity of the biased random walk sampling procedure is $O(|\mathcal{E}|ML)$, and it can be performed before training. In each iteration, to exploit the KG context, the user-specific graph attention mechanism and the GRU module have computational complexity $O(B(K_e + K_g)d^2)$. The complexity of exploiting the interaction graph context is $O(BK_vd^2)$. The overall complexity of each mini-batch iteration is $O(B(K_e + K_g + K_v)d^2)$, which is linear in all hyper-parameters except for $d$.
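The fixed-size sampling trick can be sketched as follows; sampling with replacement when an entity has fewer than $K_e$ neighbors is a common convention that we assume here, and the function name is ours.

```python
import numpy as np

def sample_fixed_neighbors(adj, entities, k_e, rng=None):
    """Sample exactly k_e neighbors per entity so that every mini-batch
    tensor has shape (batch, k_e) and the computation pattern stays fixed."""
    rng = rng if rng is not None else np.random.default_rng()
    out = np.zeros((len(entities), k_e), dtype=np.int64)
    for i, e in enumerate(entities):
        nbrs = adj.get(e) or [e]  # fall back to a self-loop if isolated
        # sample with replacement only when there are fewer than k_e neighbors
        out[i] = rng.choice(nbrs, size=k_e, replace=len(nbrs) < k_e)
    return out
```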

4 Experiments

4.1 Experimental Settings

Datasets: The experiments are performed on three public datasets: Last-FM (https://grouplens.org/datasets/hetrec-2011/), Movielens-1M (https://grouplens.org/datasets/movielens/1m/), and Book-Crossing (http://www2.informatik.uni-freiburg.de/cziegler/BX/), respectively denoted by FM, ML, and BC. Following [18, 20, 19], we keep all the ratings on the FM and BC datasets as observed implicit feedback, due to data sparsity. For the ML dataset, we keep ratings larger than 4 as implicit feedback. The KGs of these datasets are constructed from Microsoft Satori and are publicly available (https://github.com/hwwang55). As introduced in [20], only the triples from the whole KG with a confidence level greater than 0.9 are retained. The sizes of the ML and BC KGs are further reduced by selecting only the triples whose relation name contains "film" and "book", respectively. For these datasets, we match the items and entities in the sub-KGs by their names (e.g., head, film.film.name, tail for ML). Items matching no entity or multiple entities are removed. Table 1 summarizes the statistics of the experimental datasets.

FM ML BC
#Users 1,872 6,036 17,860
#Items 3,846 2,347 14,967
#Interactions 21,173 376,886 69,876
#Density 0.29% 2.66% 0.026%
#Entities 9,366 7,008 77,903
#Relations 60 7 25
#Triples 15,518 20,782 151,500
Table 1: Statistics of the experimental datasets.

Setup and Metrics: For each dataset, we randomly select 60% of the observed user-item interactions for model training and another 20% for parameter tuning; the remaining 20% are used as testing data. The quality of the top-$K$ item recommendation is assessed by three widely used evaluation metrics: Precision@$K$ (P@$K$), Recall@$K$ (R@$K$), and Hit Ratio@$K$ (HR@$K$). In the experiments, we set $K$ to 10, 20, and 50. For each metric, we first compute the accuracy for each user on the testing data, and then report the accuracy averaged over all users.
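For concreteness, the per-user metrics can be computed as in the following sketch, assuming items already seen during training are excluded from the ranked list; the averages over all users are then reported, as described above.

```python
def topk_metrics(ranked_items, test_items, k):
    """Precision@k, Recall@k, and Hit Ratio@k for a single user.

    ranked_items: candidate items sorted by predicted score (descending),
                  with the user's training items excluded;
    test_items:   the user's held-out positive items.
    """
    hits = len(set(ranked_items[:k]) & set(test_items))
    precision = hits / k
    recall = hits / max(len(test_items), 1)
    hit_ratio = 1.0 if hits > 0 else 0.0
    return precision, recall, hit_ratio
```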

Baseline Methods: We compare CGAT with the following models: (1) CFKG [1] integrates multi-type user behaviors and the item KG into a unified graph, and employs TransE [2] to learn entity embeddings; (2) RippleNet [18] exploits KG information by propagating a user's preferences over the set of entities along paths in KG rooted at her historical items; (3) MKR [20] is a multi-task feature learning approach that uses the KG embedding task to assist the recommendation task; (4) KGNN-LS [19] applies GCN on KG to compute item embeddings by propagating and aggregating the neighborhood information in the item KG; (5) KGAT [21] employs a graph attention mechanism on KG to exploit the graph context for recommendation.

Implementation Details: For CGAT, the dimensionality $d$ of the latent space, the number $K_e$ of sampled local neighbors of an entity, the number $K_v$ of a user's historical items used in model training, the regularization parameters $\mu$ and $\gamma$, and the learning rate $\lambda$ are chosen from preset candidate ranges. The hyper-parameters of the baseline methods are set following the original papers. For all methods, the optimal hyper-parameters are determined by the performances on the validation data. We implement CGAT in PyTorch, and the Adam optimizer [4] is used to learn the model parameters.

Datasets Methods P@10 R@10 HR@10 P@20 R@20 HR@20 P@50 R@50 HR@50
FM CFKG 0.0280 0.1168 0.2362 0.0222 0.1857 0.3404 0.0135 0.2812 0.4773
RippleNet 0.0285 0.1214 0.2423 0.0229 0.1948 0.3628 0.0157 0.3260 0.5336
MKR 0.0278 0.1162 0.2356 0.0215 0.1820 0.3356 0.0138 0.2877 0.4809
KGNN-LS 0.0284 0.1186 0.2441 0.0216 0.1824 0.3398 0.0136 0.2828 0.4809
KGAT 0.0466 0.1886 0.3604 0.0341 0.2756 0.4803 0.0206 0.4151 0.6426
CGAT 0.0512 0.2106 0.4022 0.0369 0.2994 0.5203 0.0218 0.4413 0.6687
ML CFKG 0.1054 0.1038 0.5680 0.0896 0.1753 0.7126 0.0633 0.2991 0.8388
RippleNet 0.1271 0.1251 0.6227 0.1043 0.2008 0.7474 0.0758 0.3442 0.8667
MKR 0.1376 0.1370 0.6581 0.1154 0.2192 0.7765 0.0848 0.3793 0.8852
KGNN-LS 0.1311 0.1310 0.6419 0.1126 0.2172 0.7766 0.0833 0.3762 0.8811
KGAT 0.1533 0.1608 0.7090 0.1274 0.2541 0.8179 0.0910 0.4189 0.9066
CGAT 0.1575 0.1674 0.7219 0.1288 0.2608 0.8264 0.0916 0.4311 0.9191
BC CFKG 0.0155 0.0725 0.1391 0.0101 0.0904 0.1745 0.0061 0.1291 0.2435
RippleNet 0.0147 0.0706 0.1336 0.0099 0.0880 0.1736 0.0060 0.1261 0.2429
MKR 0.0154 0.0732 0.1386 0.0105 0.0920 0.1811 0.0063 0.1306 0.2496
KGNN-LS 0.0155 0.0730 0.1411 0.0104 0.0910 0.1797 0.0062 0.1306 0.2454
KGAT 0.0132 0.0572 0.1202 0.0094 0.0776 0.1600 0.0063 0.1172 0.2362
CGAT 0.0161 0.0645 0.1402 0.0119 0.0920 0.1909 0.0078 0.1412 0.2718
Table 2: Performances of different recommendation algorithms. The best results are in boldface and the second-best results are underlined. * indicates that CGAT significantly outperforms the competitors under the Wilcoxon signed rank significance test.

4.2 Performance Comparison

Table 2 summarizes the results on the different datasets, from which we make the following observations. On the FM and ML datasets, KGAT achieves the best performance among all baselines. On the BC dataset, MKR achieves results comparable to KGNN-LS, and outperforms CFKG, RippleNet, and KGAT. The KG and interaction graph of the BC dataset are very sparse. MKR jointly solves the KG embedding and recommendation tasks by learning high-order feature interactions between items and entities; its cross&compress units are effective in transferring knowledge between the user-item interaction graph and KG, and thus help alleviate the data sparsity problem. Moreover, CGAT usually achieves the best performance on all datasets, in terms of all metrics. In most scenarios (i.e., on 23 of the 27 evaluation metrics), CGAT significantly outperforms the baseline methods under the Wilcoxon signed rank significance test. Averaged over all datasets, CGAT outperforms CFKG, RippleNet, MKR, KGNN-LS, and KGAT by 26.07%, 21.32%, 22.29%, 21.92%, and 9.56%, respectively, in terms of HR@20. These results demonstrate the effectiveness of CGAT in exploiting both the KG context and users' historical interaction context for recommendation.

4.3 Ablation Study

Dataset CGAT w/o L CGAT w/o G CGAT w/o UA CGAT
FM 0.5118 0.5167 0.5136 0.5203
ML 0.8193 0.8111 0.8215 0.8264
BC 0.1884 0.1864 0.1817 0.1909
Table 3: Performances of CGAT variants estimated by HR@20.

Moreover, we also conduct ablation studies to evaluate the performance of the following CGAT variants: (1) CGAT w/o L deletes the local context embedding of an item from CGAT and only uses the non-local context embedding as the final context embedding, i.e., the gate coefficient $\mathbf{g}$ in Eq. (9) is set to $\mathbf{0}$; (2) CGAT w/o G removes the non-local context embedding of an item from the original model, the opposite setting to CGAT w/o L; (3) CGAT w/o UA removes the user's embedding when exploiting the local context information in KG (i.e., removing $\mathbf{u}$ in Eq. (3)).

Due to space limitations, we only report the recommendation accuracy measured by HR@20. We summarize the results in Table 3 and have the following findings. CGAT consistently outperforms the variants CGAT w/o L and CGAT w/o G, indicating that both the local and non-local context in KG are essential for recommendation. CGAT achieves better performance than CGAT w/o UA. This demonstrates that the user-specific graph attention mechanism is more suitable for personalized recommendation than a simple attention mechanism that cannot capture users' personalized preferences. CGAT w/o L is slightly superior to CGAT w/o G on the ML and BC datasets. This indicates that the non-local context information plays a complementary role to the local context information, and may sometimes be more important in improving the recommendation accuracy.

4.4 Parameter Sensitivity Study

Figure 3 summarizes the performances of CGAT with respect to (w.r.t.) different settings of the key hyper-parameters. As the number of neighboring entities in KG usually varies across items, we first study how the fixed number $K_e$ of sampled neighbors affects the performance. From Figure 3(a), we can note that CGAT achieves the best performance when $K_e$ is set to 4, while larger $K_e$ does not further improve the performance. This optimal setting of $K_e$ is close to the average number of neighbors of an entity in KG, which is 3.31 on the FM dataset. Then, we vary the number $K_v$ of a user's historical items used to represent her potential preferences. As shown in Figure 3(a), the best performance is achieved by setting $K_v$ to 16; when $K_v$ is larger than 16, further increasing $K_v$ reduces the performance. Figure 3(b) shows the performance trend of CGAT w.r.t. different settings of $\mu$. The performances achieved by setting $\mu$ to positive values are better than that achieved by setting $\mu$ to 0. This observation demonstrates that the KG structure constraint in Eq. (15) can help improve the recommendation accuracy. Moreover, we also study the impacts of the number of sampled paths $M$ and the path length $L$ in the BRWS module. From Figure 3(c), we can note that the best performance is achieved by setting $M$ to 15. This indicates that the most relevant entities in the non-local neighborhood of an entity can be captured by performing the random walk sampling 15 times. As shown in Figure 3(d), better performance is achieved by setting $L$ in the range between 4 and 12. Further increasing $L$ incurs more training time, and may sometimes even decrease the recommendation performance.

Figure 3: Performances of CGAT on the FM dataset, w.r.t. different settings of $K_e$, $K_v$, $\mu$, $M$, and $L$.

5 Conclusion and Future Work

This paper proposes a novel recommendation model, named Contextualized Graph Attention Network (CGAT), which explicitly exploits both the local and non-local context information in KG, as well as the interaction context information given by users' historical behaviors. Specifically, CGAT aggregates the local context information in KG by a user-specific graph attention mechanism, which captures users' personalized preferences on entities. To incorporate the non-local context in KG, a biased random walk sampling process is used to extract the important entities for a target entity over the entire KG, and a GRU module is employed to explicitly aggregate the embeddings of these entities. In addition, CGAT utilizes an item-specific attention mechanism to model the influences between items. The superiority of CGAT has been validated by comparison with state-of-the-art baselines on three datasets. For future work, we intend to develop different aggregation strategies to integrate the context information in KG and the interaction graph, to further improve the recommendation accuracy.

References

  • [1] Q. Ai, V. Azizi, X. Chen, and Y. Zhang (2018) Learning heterogeneous knowledge base embeddings for explainable recommendation. Algorithms 11 (9), pp. 137. Cited by: §4.1.
  • [2] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko (2013) Translating embeddings for modeling multi-relational data. In NIPS’13, pp. 2787–2795. Cited by: §4.1.
  • [3] Y. Cao, X. Wang, X. He, Z. Hu, and T. Chua (2019) Unifying knowledge graph learning and recommendation: towards a better understanding of user preferences. In WWW’19, pp. 151–161. Cited by: §1, §2.
  • [4] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.1.
  • [5] T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In ICLR’17, Cited by: §1.
  • [6] Y. Koren (2008) Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDD’08, pp. 426–434. Cited by: §3.2.
  • [7] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu (2015) Learning entity and relation embeddings for knowledge graph completion. In AAAI’15, Cited by: §2.
  • [8] Y. Liu, W. Wei, A. Sun, and C. Miao (2014) Exploiting geographical neighborhood characteristics for location recommendation. In CIKM’14, pp. 739–748. Cited by: §1.
  • [9] Y. Liu, L. Zhao, G. Liu, X. Lu, P. Gao, X. Li, and Z. Jin (2018) Dynamic bayesian logistic matrix factorization for recommendation with implicit feedback.. In IJCAI’18, pp. 3463–3469. Cited by: §1.
  • [10] Y. Liu, P. Zhao, X. Liu, M. Wu, L. Duan, and X. Li (2017) Learning user dependencies for recommendation. In IJCAI’17, pp. 2379–2385. Cited by: §1.
  • [11] W. Ma, M. Zhang, Y. Cao, W. Jin, C. Wang, Y. Liu, S. Ma, and X. Ren (2019) Jointly learning explainable rules for recommendation with knowledge graph. In WWW’19, pp. 1210–1221. Cited by: §2.
  • [12] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme (2009) BPR: bayesian personalized ranking from implicit feedback. In UAI’09, pp. 452–461. Cited by: §3.3.
  • [13] C. Shi, Z. Zhang, P. Luo, P. S. Yu, Y. Yue, and B. Wu (2015) Semantic path based personalized recommendation on weighted heterogeneous information networks. In CIKM’15, pp. 453–462. Cited by: §2.
  • [14] Y. Shi, M. Larson, and A. Hanjalic (2014) Collaborative filtering beyond the user-item matrix: a survey of the state of the art and future challenges. ACM Computing Surveys 47 (1), pp. 3. Cited by: §3.2.
  • [15] Z. Sun, Q. Guo, J. Yang, H. Fang, G. Guo, J. Zhang, and R. Burke (2019) Research commentary on recommendations with side information: a survey and research directions. Electronic Commerce Research and Applications 37, pp. 100879. Cited by: §1.
  • [16] Z. Sun, J. Yang, J. Zhang, A. Bozzon, L. Huang, and C. Xu (2018) Recurrent knowledge graph embedding for effective recommendation. In RecSys’18, pp. 297–305. Cited by: §2.
  • [17] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio (2018) Graph attention networks. In ICLR’18, Cited by: §1.
  • [18] H. Wang, F. Zhang, J. Wang, M. Zhao, W. Li, X. Xie, and M. Guo (2018) Ripplenet: propagating user preferences on the knowledge graph for recommender systems. In CIKM’18, pp. 417–426. Cited by: §2, §4.1, §4.1.
  • [19] H. Wang, F. Zhang, M. Zhang, J. Leskovec, M. Zhao, W. Li, and Z. Wang (2019) Knowledge graph convolutional networks for recommender systems with label smoothness regularization. In KDD’19, Cited by: §1, §1, §2, §4.1, §4.1.
  • [20] H. Wang, F. Zhang, M. Zhao, W. Li, X. Xie, and M. Guo (2019) Multi-task feature learning for knowledge graph enhanced recommendation. In WWW’19, pp. 2000–2010. Cited by: §2, §4.1, §4.1.
  • [21] X. Wang, X. He, Y. Cao, M. Liu, and T. Chua (2019) KGAT: knowledge graph attention network for recommendation. In KDD’19, Cited by: §1, §1, §2, §4.1.
  • [22] Q. Wu, Y. Liu, C. Miao, B. Zhao, Y. Zhao, and L. Guan (2019) PD-gan: adversarial learning for personalized diversity-promoting recommendation. In IJCAI’19, pp. 3870–3876. Cited by: §1.
  • [23] X. Yu, X. Ren, Y. Sun, B. Sturt, U. Khandelwal, Q. Gu, B. Norick, and J. Han (2013) Recommendation in heterogeneous information networks with implicit user feedback. In RecSys’13, pp. 347–350. Cited by: §1, §2.
  • [24] F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W. Ma (2016) Collaborative knowledge base embedding for recommender systems. In KDD’16, pp. 353–362. Cited by: §1, §2.
  • [25] H. Zhao, Q. Yao, J. Li, Y. Song, and D. L. Lee (2017) Meta-graph based recommendation fusion over heterogeneous information networks. In KDD’17, pp. 635–644. Cited by: §1, §2.
  • [26] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun (2018) Graph neural networks: a review of methods and applications. arXiv preprint arXiv:1812.08434. Cited by: §1.