SAIN: Self-Attentive Integration Network for Recommendation

05/27/2019
by Seoungjun Yun, et al.
Korea University

With the growing importance of personalized recommendation, numerous recommendation models have been proposed recently. Among them, Matrix Factorization (MF) based models are the most widely used in the recommendation field due to their high performance. However, MF based models suffer from the cold-start problem when user-item interactions are sparse. To deal with this problem, content-based recommendation models, which use the auxiliary attributes of users and items, have been proposed. Since these models use auxiliary attributes, they are effective in cold-start settings. However, most of the proposed models are either unable to capture complex feature interactions or not properly designed to combine user-item feedback information with content information. In this paper, we propose the Self-Attentive Integration Network (SAIN), a model that effectively combines user-item feedback information and auxiliary information for recommendation tasks. In SAIN, a self-attention mechanism is used in the feature-level interaction layer to effectively consider interactions between multiple features, while the information integration layer adaptively combines content and feedback information. The experimental results on two public datasets show that our model outperforms the state-of-the-art models by 2.13% and 1.01% on MovieLens and Weibo-Douban, respectively, in terms of RMSE.



1. Introduction

With the growing importance of recommender systems, numerous models for highly personalized recommendation have been proposed. Latent Factor Models (LFMs) such as Matrix Factorization (MF) (Koren et al., 2009) and Factorization Machine (FM) (Rendle, 2010) are widely used. Latent factor models can learn user and item representations in an end-to-end fashion without any explicit feature engineering.

For example, MF utilizes only user-item interaction data, assuming that similar items will be located close to each other in the latent space during the learning process. Based on this assumption, MF models with deep neural networks have been applied to capture more complex user-item relations in a non-linear way (He et al., 2017). Despite their simple structures, MF models have proven to be effective in various recommendation tasks.

However, since MF models utilize only user-item interaction data, they cannot effectively learn user and item representations when the amount of interaction data is insufficient. If a user or an item is newly entered into the system, or an item is unpopular, such data can be scarce. This is known as the cold-start problem. To overcome this limitation, models that incorporate auxiliary information have been proposed recently. For example, in (Chen et al., 2017) and (Chen et al., 2018), multiple features are combined based on user preferences using an attention mechanism. However, most of the proposed models are not designed to consider high-order dependencies between features: they assume that features are not affected by other features, which is not realistic. As illustrated in Figure 1, the same user gives very different ratings to two movies in the same genre (Action) and with the same actor (Brad Pitt). The figure shows that the features of an item interact with each other to create a synergetic effect and determine the characteristics of the item. Since determining which feature combinations are synergetic is very costly, we need a model that can automatically learn combinative feature representations.

Figure 1. Two movies in the same genre but with different characteristics.

Factorization Machine (FM) models, which are another line of LFMs, are designed to consider feature interactions. FM models learn all features related to user and item entities as latent factors and predict ratings based on the second-order interactions between the features. FM models with deep neural networks have been proposed to learn higher-order interactions between features and capture non-linear relationships between them (He and Chua, 2017; Lian et al., 2018). Although FM models have proven to be effective on sparse data, they are not properly designed to combine user-item feedback information with content information. As pointed out in (liu2018context), treating users, items, and features as same-level entities may over-estimate the influence of content information, thereby weakening the impact of user-item feedback information. For a user with relatively many interactions, side information might even become noise; in that case, the model should give more weight to user-item feedback information.

To overcome the limitations of MF and FM, we propose Self-Attentive Integration Network (SAIN) in this paper. The contributions of this work can be summarized as follows. We designed a model that effectively combines user-item feedback information and content information to infer user preferences. To consider complex feature interactions, we adopted a Multi-head Self-attention network that can capture interactions between entities (Vaswani et al., 2017). We conducted experiments on the public datasets MovieLens and Weibo-Douban. The experimental results show that our SAIN model outperforms the state-of-the-art MF and FM models in the rating prediction task.

2. Self-Attentive Integration Network

In this section, we introduce our Self-Attentive Integration Network (SAIN) model which can be used for the rating prediction task. SAIN effectively combines user-item feedback information and content information to infer user preferences. Figure 2 illustrates the overall architecture of SAIN. The network is composed of the following three layers: 1) Feature-Level Interaction Layer which generates user/item combinative feature representations while capturing high-order feature interactions, 2) Information Integration Layer which combines user preference information from user-item feedback and content information from combinative feature representations, and 3) Output Layer which predicts user ratings on items based on user/item representations.

2.1. Feature-Level Interaction Layer

The feature-level interaction layer learns how to effectively capture high-order interactions between multiple features to produce combinative feature representations. We adopted multi-head self-attention, which is widely used for capturing interactions between entities. Let $u$ and $v$ denote a user and an item, respectively. Each user has a set of content features $F^u = \{f^u_1, \dots, f^u_m\}$, where $m$ is the number of user content features. Similarly, each item has a set of content features $F^v = \{f^v_1, \dots, f^v_n\}$, where $n$ is the number of item features. Each element of a content feature set is a $d$-dimensional latent vector. To consider interactions between item features as well as those between user features, we combine the set of user features and the set of item features and use the combined set $F = F^u \cup F^v$ as input for our SAIN model.

The latent vector of each feature is converted into $d_h$-dimensional query, key, and value vectors for each attention head. We then measure the interaction between the $i$-th and $j$-th features using scaled dot-product attention under a specific attention head $h$ as follows:

$\alpha^{(h)}_{ij} = \mathrm{softmax}_j\left(\frac{(W^{(h)}_Q f_i)^\top (W^{(h)}_K f_j)}{\sqrt{d_h}}\right)$ (1)

where $W^{(h)}_Q$ and $W^{(h)}_K$ are the query and key projection matrices of head $h$. We used only the $K$ features with the highest attention scores to filter out less informative interactions and avoid overfitting. Each new feature vector is obtained by summing the weighted value vectors of those top-$K$ features:

$\tilde{f}^{(h)}_i = \sum_{j \in \mathrm{Top}_K(i)} \alpha^{(h)}_{ij} \, W^{(h)}_V f_j$ (2)

A different feature representation is obtained from each attention head $h$. By projecting the original latent vectors into multiple different subspaces, each head can capture different aspects of the feature interactions. The feature vectors from all heads are concatenated as follows:

$\tilde{f}_i = \tilde{f}^{(1)}_i \oplus \tilde{f}^{(2)}_i \oplus \dots \oplus \tilde{f}^{(H)}_i$ (3)

where $H$ denotes the number of attention heads and $\oplus$ denotes concatenation. Although interactions between features provide important information, we should also preserve each feature's original characteristics. We therefore add the original latent vectors to the new vectors after multi-head attention through residual connections:

$\hat{f}_i = \max(W_R \tilde{f}_i + f_i, 0)$ (4)

where $\max(\cdot, 0)$ denotes the Rectifier (ReLU) activation function and $W_R$ is a projection matrix. Given the new feature representations $\hat{f}_i$, we map the user feature representations to a user content vector and the item feature representations to an item content vector as follows:

$x^u_c = FC_u(\hat{f}^u_1 \oplus \dots \oplus \hat{f}^u_m)$ (5)
$x^v_c = FC_v(\hat{f}^v_1 \oplus \dots \oplus \hat{f}^v_n)$ (6)

where $FC_u$ and $FC_v$ denote fully connected layers.

We use the dot product to model the interaction between the content representations, as in the latent factor models. The score computed from the feature representations of a user and an item can be considered a rating prediction based on content alone. The content score is calculated as $\hat{r}_{content} = x^u_c \cdot x^v_c$.
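For concreteness, the following is a minimal PyTorch sketch of the feature-level interaction layer under Eqs. (1)-(4); the class name, tensor shapes, and the exact top-K masking are our assumptions, not the authors' released implementation.

```python
# A hedged sketch of the feature-level interaction layer (Eqs. 1-4).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureInteraction(nn.Module):
    def __init__(self, d=64, n_heads=2, top_k=8):
        super().__init__()
        assert d % n_heads == 0
        self.h, self.dk, self.k = n_heads, d // n_heads, top_k
        self.w_q = nn.Linear(d, d, bias=False)  # W_Q for all heads at once
        self.w_k = nn.Linear(d, d, bias=False)  # W_K
        self.w_v = nn.Linear(d, d, bias=False)  # W_V
        self.w_r = nn.Linear(d, d)              # W_R before the residual connection

    def forward(self, feats):                   # feats: (batch, m + n, d)
        b, f, d = feats.shape
        # project and split into heads: (batch, heads, features, d_h)
        q = self.w_q(feats).view(b, f, self.h, self.dk).transpose(1, 2)
        k = self.w_k(feats).view(b, f, self.h, self.dk).transpose(1, 2)
        v = self.w_v(feats).view(b, f, self.h, self.dk).transpose(1, 2)
        # Eq. (1): scaled dot-product attention scores
        scores = q @ k.transpose(-2, -1) / self.dk ** 0.5   # (b, h, f, f)
        # keep only the K highest scores per query feature, mask out the rest
        kth = scores.topk(min(self.k, f), dim=-1).values[..., -1:]
        alpha = F.softmax(scores.masked_fill(scores < kth, float('-inf')), dim=-1)
        # Eqs. (2)-(3): weighted sum per head, then concatenate the heads
        out = (alpha @ v).transpose(1, 2).reshape(b, f, d)
        # Eq. (4): residual connection followed by ReLU
        return F.relu(self.w_r(out) + feats)
```

The user and item content vectors of Eqs. (5)-(6) can then be obtained by flattening the user and item slices of the output and passing each through its own fully connected layer.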

Figure 2. Self-Attentive Integration Network.

2.2. Information Integration Layer

The information integration layer adaptively combines the feedback and content information of users and items using an attention mechanism, which has proven to be effective (Chen et al., 2018; Shi et al., 2018). The importance of each type of information varies according to the circumstances: if the system has a considerable amount of interaction data for a user, we should give more weight to feedback information; if a user or item is relatively new to the system, we should give more weight to content information.

We use the outputs of the feature-level interaction layer as content representations. To obtain information from user-item feedback, we train independent feedback latent vectors. The preference score is calculated as $\hat{r}_{pref} = x^u_f \cdot x^v_f$, where $x^u_f$ is the user feedback latent vector and $x^v_f$ is the item feedback latent vector. We use an attention mechanism to combine the content and feedback vectors as follows:

$a^u_c,\; a^u_f = \mathrm{softmax}\big(FC(x^u_c),\, FC(x^u_f)\big)$ (7)

where $FC$ denotes a fully connected layer; the item-side weights $a^v_c$ and $a^v_f$ are computed analogously. By combining the vectors using Eq. (8), we obtain the final user and item representations.

$x^u = a^u_c x^u_c + a^u_f x^u_f, \qquad x^v = a^v_c x^v_c + a^v_f x^v_f$ (8)
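A minimal PyTorch sketch of this layer is given below, assuming a shared fully connected scoring layer for Eq. (7); the item side would use the same module.

```python
# A hedged sketch of the information integration layer (Eqs. 7-8);
# the exact form of the FC scoring layer is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Integration(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.fc = nn.Linear(d, 1)  # scores each information source

    def forward(self, x_c, x_f):   # content / feedback vectors: (batch, d)
        # Eq. (7): softmax over the two attention logits
        a = F.softmax(torch.cat([self.fc(x_c), self.fc(x_f)], dim=1), dim=1)
        # Eq. (8): convex combination of content and feedback vectors
        return a[:, :1] * x_c + a[:, 1:] * x_f
```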

2.3. Output Layer

Our final prediction is based on the user and item representations from the information integration layer. The combined score is calculated as $\hat{r}_{combined} = x^u \cdot x^v$, where $x^u$ and $x^v$ are the combined user and item latent vectors (content representation + feedback latent vector), respectively. We use Root Mean Squared Error (RMSE) as our training metric and also report Mean Absolute Error (MAE) scores in Section 3.4:

$L = \sqrt{\frac{1}{|D|} \sum_{(u,v) \in D} (r_{uv} - \hat{r}_{uv})^2}$ (9)

where $D$ is the set of observed ratings and $\hat{r}_{uv}$ is one of the three predicted scores.

We train the parameters to minimize the RMSE of all three scores (content, preference, and combined). By jointly minimizing the three losses, the feedback and feature latent vectors are trained independently, so each can capture feedback and content information, and the model learns to effectively integrate the two types of information. In the evaluation stage, we use only the combined score to predict ratings.
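A sketch of this joint objective is shown below; equal weighting of the three loss terms is an assumption.

```python
# A hedged sketch of the joint training loss over the three scores.
import torch

def sain_loss(r, r_content, r_pref, r_combined):
    """r: observed ratings; the other arguments are the predicted scores."""
    rmse = lambda pred: torch.sqrt(torch.mean((pred - r) ** 2))
    return rmse(r_content) + rmse(r_pref) + rmse(r_combined)
```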

3. Experiments

In this section, we compare our proposed SAIN model with various baseline models to show the effectiveness of our model.

3.1. Datasets

We conducted experiments on two public datasets. As our work focuses on the importance of interactions between multiple features, we picked datasets with both user and item features. The statistics of the datasets are summarized in Table 1. Ratings range from 1 to 5 in both datasets, and users with fewer than 5 ratings were removed. Both datasets were split into training, validation, and test sets with an 8:1:1 ratio. A detailed description of each dataset is given below:

(1) MovieLens. This dataset includes user ratings on movies and is one of the most popular datasets in the recommendation field. It was originally constructed by GroupLens (https://grouplens.org/datasets/movielens). Besides user-item interactions, we leveraged features such as age and gender for users and genres for movies. In addition to the genre information included in the MovieLens dataset, we collected and used the director and actor information of each movie from IMDB (https://www.imdb.com/interfaces/).

(2) Weibo-Douban. This dataset contains data from the web services Weibo and Douban and was first introduced in (Yang et al., 2018). Weibo is a Twitter-like service used in China, and Douban is a Chinese review website for movies and books. The dataset contains user ratings on movies collected from Douban and user information from Weibo. The authors collected user tags and used them as user features. Douban also provides movie tags that are labeled by users. As there are approximately 50,000 tags for items and users, we used only the 50 most frequently used tags for items and users, respectively (see the sketch below).
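One plausible way to perform this tag filtering is sketched below; the input format `tag_lists` (entity -> list of tags) is hypothetical.

```python
# A sketch of keeping only the 50 most frequent tags, as described above.
from collections import Counter

def most_frequent_tags(tag_lists, n=50):
    counts = Counter(tag for tags in tag_lists.values() for tag in tags)
    return {tag for tag, _ in counts.most_common(n)}
```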

Dataset        #Users  #Items   #Ratings  Sparsity
MovieLens      944     1,683    100,000   93.70%
Weibo-Douban   5,000   37,993   870,740   99.54%
Table 1. Statistics of the datasets.

3.2. Baselines

We compared our model with various latent factor models. We used the official implementations of the baseline models when possible. For the baselines NFM and xDeepFM, we used the DeepCTR implementation (https://deepctr-doc.readthedocs.io/en/latest/index.html).

Biased MF (Koren et al., 2009). This is the most basic MF model which uses only user-item interaction data, and user and item biases. Biased MF is widely used as a baseline in recommendation tasks, especially rating prediction tasks.

NCF (He et al., 2017). Neural collaborative filtering is a deep-learning-based MF model. Most deep learning approaches focus on improving the shallow structures of collaborative filtering models and share a similar structure with NCF.

AFM (Chen et al., 2018). The attention-driven factor model is a state-of-the-art deep learning model that uses both content features and user-item feedback. AFM uses an attention mechanism to model different user preferences for item content features.

LibFM (Rendle, 2010). We used the official implementation of Factorization Machine (FM) released by the authors.

NFM (He and Chua, 2017). This is one of the first deep-neural-network-based FM models. It combines FM's linear structure with a non-linear deep neural network structure to capture higher-order feature interactions.

xDeepFM (Lian et al., 2018). A state-of-the-art deep learning model that captures high-order feature interactions. We did not include other recent FM-based deep learning models such as Wide&Deep or Deep&Cross, as xDeepFM outperforms them in various settings.

3.3. Implementation Details

We implemented our model in PyTorch. We used an embedding dimension of 64 for the MF models and our model, and an embedding dimension of 16 for the FM models, as a smaller embedding size is more effective for them. The Adam optimizer with a learning rate of 1e-3 and an L2-regularization rate of 1e-4 was used for all models (see the sketch below). For the multi-head self-attention layer in SAIN, we used one self-attention layer with two attention heads. To prevent overfitting, we add a batch normalization layer after the self-attention layer and apply dropout with a ratio of 0.1. Our source code is available at https://github.com/sigir2019-sain/SAIN.
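For reference, the optimizer configuration above corresponds to the following PyTorch setup; model construction and data loading are omitted, and `model` is assumed to be the assembled SAIN model.

```python
# The optimizer setup described above, as a hedged sketch.
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    # Adam with learning rate 1e-3 and L2 regularization (weight decay) 1e-4
    return torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```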

3.4. Performance Comparison

Our experimental results are summarized in Table 2. The MF models obtain higher performance on the dataset with more user-item interactions (MovieLens), while the FM models obtain higher performance on the sparser Weibo-Douban dataset. In terms of RMSE, our model outperforms the best-performing baseline by 2.13% and 1.01% on MovieLens and Weibo-Douban, respectively. By considering complex feature interactions, SAIN effectively learns feature representations and integrates content and feedback information. As a result, SAIN achieves the highest performance on both datasets.

Dataset        MovieLens          Weibo-Douban
Models         RMSE     MAE       RMSE     MAE
BiasedMF       0.9132   0.7165    0.7933   0.6169
NCF            0.9040   0.7150    0.7805   0.6158
AFM            0.9126   0.7175    0.7717   0.6034
LibFM          0.9234   0.7252    0.7755   0.6141
NFM            0.9134   0.7189    0.7744   0.6111
xDeepFM        0.9104   0.7155    0.7730   0.6077
SAIN           0.8847   0.6958    0.7639   0.5957
Table 2. Experimental results on the MovieLens and Weibo-Douban datasets.
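The relative improvements reported above can be verified directly from Table 2 (best baseline per dataset: NCF on MovieLens, AFM on Weibo-Douban):

```python
# Sanity check of the reported relative RMSE improvements in Table 2.
best_ml, sain_ml = 0.9040, 0.8847  # NCF vs. SAIN on MovieLens
best_wd, sain_wd = 0.7717, 0.7639  # AFM vs. SAIN on Weibo-Douban
print(f"{(best_ml - sain_ml) / best_ml:.2%}")  # 2.13%
print(f"{(best_wd - sain_wd) / best_wd:.2%}")  # 1.01%
```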

3.5. Analysis

3.5.1. Effect of Multi-head Self-attention

As stated in Section 2.1, we use a multi-head self-attention structure in the feature-level interaction layer. We examined the attention scores of the features to see how multiple features influence each other. A visualization of the attention scores is provided in Figure 3. Example (a) in Figure 3 shows that two movies in the same thriller genre can have very different attention patterns: one movie focuses on an actor (Bruce Willis), and the other focuses on a director (Quentin Tarantino). Furthermore, we verified that each head captures different aspects of the feature interactions. Example (b) in Figure 3 shows that the attention scores of the same movie can have very different patterns in different attention heads: one head focused on user features (gender, occupation), whereas the other concentrated on item features (actor, director).

Figure 3. Attention score visualization. (a) Comparison of the attention scores of two movies in the same genre. (b) Attention scores of two different heads for the same movie.

3.5.2. Effect of Top-K features.

We used only the features with the K highest attention scores to avoid overfitting, and observed the changes in performance while varying K. As shown in Figure 4, the prediction accuracy is highest when K is 8.

Figure 4. Changes in performance while varying K.

4. Conclusion

In this paper, we proposed the Self-Attentive Integration Network (SAIN), a model that effectively combines user-item feedback information and auxiliary information for recommendation tasks. We conducted experiments on the well-known MovieLens and Weibo-Douban datasets. The experimental results show that our model outperforms state-of-the-art latent factor models.

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF-2017R1A2A1A17069645, NRF-2017M3C4A7065887).

References

  • Chen et al. (2017) Jingyuan Chen, Hanwang Zhang, Xiangnan He, Liqiang Nie, Wei Liu, and Tat-Seng Chua. 2017. Attentive collaborative filtering: Multimedia recommendation with item- and component-level attention. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 335–344.
  • Chen et al. (2018) Liang Chen, Yang Liu, Zibin Zheng, and Philip Yu. 2018. Heterogeneous Neural Attentive Factorization Machine for Rating Prediction. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). ACM, 833–842. https://doi.org/10.1145/3269206.3271759
  • He and Chua (2017) Xiangnan He and Tat-Seng Chua. 2017. Neural factorization machines for sparse predictive analytics. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval. ACM, 355–364.
  • He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 173–182.
  • Koren et al. (2009) Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30–37.
  • Lian et al. (2018) Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, and Guangzhong Sun. 2018. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems. arXiv preprint arXiv:1803.05170 (2018).
  • Rendle (2010) Steffen Rendle. 2010. Factorization machines. In Data Mining (ICDM), 2010 IEEE 10th International Conference on. IEEE, 995–1000.
  • Shi et al. (2018) Shaoyun Shi, Min Zhang, Yiqun Liu, and Shaoping Ma. 2018. Attention-based Adaptive Model to Unify Warm and Cold Starts Recommendation. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18). ACM, 127–136. https://doi.org/10.1145/3269206.3271710
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998–6008.
  • Yang et al. (2018) Deqing Yang, Lihan Chen, Jiaqing Liang, Yanghua Xiao, and Wei Wang. 2018. Social Tag Embedding for the Recommendation with Sparse User-Item Interactions. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 127–134.