
Rethinking Lifelong Sequential Recommendation with Incremental Multi-Interest Attention

05/28/2021
by   Yongji Wu, et al.
Duke University
USTC

Sequential recommendation plays an increasingly important role in many e-commerce services such as display advertisement and online shopping. With the rapid development of these services in the last two decades, users have accumulated a massive amount of behavior data. Richer sequential behavior data has been proven to be of great value for sequential recommendation. However, traditional sequential models fail to handle users' lifelong sequences, as their linear computational and storage cost prohibits them from performing online inference. Recently, lifelong sequential modeling methods that borrow the idea of memory networks from NLP have been proposed to address this issue. However, the RNN-based memory networks they are built upon intrinsically suffer from an inability to capture long-term dependencies, and may instead be overwhelmed by noise on extremely long behavior sequences. In addition, as the user's behavior sequence gets longer, more diverse interests are demonstrated in it. It is therefore crucial to model and capture the diverse interests of users. To tackle these issues, we propose a novel lifelong incremental multi-interest self-attention based sequential recommendation model, namely LimaRec. Our proposed method benefits from carefully designed self-attention to identify relevant information from users' behavior sequences with different interests. It is still able to incrementally update users' representations for online inference, similarly to memory-network-based approaches. We extensively evaluate our method on four real-world datasets and demonstrate its superior performance compared to state-of-the-art baselines.


1. Introduction

With the rapid development of e-commerce businesses such as online shopping, display advertisement, and video streaming services, a huge amount of behavior data has been produced by users. This provides us with great opportunities to utilize the increasingly richer sequential behavior data, and learn better user representations for a variety of tasks such as CTR prediction and recommendation (Ren et al., 2019; Li et al., 2019; Zhou et al., 2018). The value of rich behavior history (long behavior sequences) has been demonstrated in (Ren et al., 2019) for user modeling. We focus on the sequential recommendation task (Kang and McAuley, 2018; Li et al., 2019; Cen et al., 2020), which plays a vital role in various online services, including the aforementioned ones.

However, modeling long sequences in sequential recommendation is not a trivial task, and it poses a series of great challenges. Traditional sequential recommendation models need to store the whole sequence and perform inference over it (Covington et al., 2016; Zhou et al., 2018, 2018; Xiao et al., 2020). These methods suffer from immense computational and storage costs, which scale linearly with the sequence length. Strict storage and latency constraints have therefore prevented us from employing long sequences for real-time inference in online systems (Pi et al., 2020). Furthermore, as the user behavior sequence gets longer, it is more likely to contain items belonging to different categories, revealing the multi-facet nature of users' interests (Ren et al., 2019; Pi et al., 2020). The diverse interests of users also contribute to the complex temporal dynamics in long behavior sequences, where various temporal dependency patterns may be exhibited (Ren et al., 2019). If we fail to identify the different interests manifested in the user's behavior history, we would instead be overwhelmed by the noise brought by irrelevant behaviors when using long sequences.

Recently, a few pioneering studies have been conducted to address this issue under the click-through rate (CTR) prediction task. (Ren et al., 2019) first formulates the problem of lifelong sequential modeling. It borrows the idea of memory networks from the NLP domain: a personalized behavior memory is maintained for each user to memorize his/her behavior sequences. The RNN-based hierarchical memory serves as the user representation and can be incrementally updated in constant time when the user clicks a new item. Pi et al. (2019) propose another memory-based framework, which adopts a different memory update strategy based on NTM (Graves et al., 2014). Both methods rely on the same set of RNN parameters to update the memory for each user. Therefore, they fail to adapt to each individual user's personalized behavioral transition patterns, as different users' behaviors have different temporal dynamics. Besides, since RNN-based models pass sequential information through hidden states, they tend to forget the past (Bengio et al., 1994). Though architectures like LSTM (Hochreiter and Schmidhuber, 1997), GRU (Cho et al., 2014) and NTM (Graves et al., 2014) have been proposed to alleviate this problem, they still suffer from the long path length between positions and fail to capture long-term dependencies when the sequence is too lengthy (Kang and McAuley, 2018; Vaswani et al., 2017).

Search-based methods are introduced in (Pi et al., 2020; Qin et al., 2020) for CTR prediction. They argue that the whole behavior history contains too much noise for sequential models to make use of directly. Instead, they propose search strategies (a hand-crafted one in (Pi et al., 2020) and a model-based one in (Qin et al., 2020)) to retrieve only the relevant sub-sequence for the recommender to model. However, these methods utilize the target item to perform the search. Therefore, they cannot be applied to the sequential recommendation task, or the matching stage of recommendation systems, where target items are not available as model inputs.

The self-attention mechanism presented in Transformers (Vaswani et al., 2017) has been incredibly successful in many sequential modeling tasks across a variety of fields, such as machine translation (Xu et al., 2020; Devlin et al., 2018; Dai et al., 2019) and computer vision (Wang et al., 2018; Parmar et al., 2018; Bello et al., 2019). The self-attention mechanism uses the sequence to attend to itself. It adaptively assigns different weights to different positions of the sequence according to their relative importance, and then aggregates the sequence using the computed weights. In this way, we can filter out the irrelevant behaviors in the user's historical sequence (Kang and McAuley, 2018). Compared to RNNs, the self-attention structure has a maximal path length of O(1) between any two positions, which facilitates learning long-term dependencies (Vaswani et al., 2017). Self-attention networks have also demonstrated superior performance in sequential recommendation (Sun et al., 2019; Kang and McAuley, 2018; Lian et al., 2020; Zhang et al., 2018), greatly outperforming RNN/CNN-based approaches.

Despite self-attention's triumph in sequential recommendation, and its intrinsic advantages in identifying relevant information from long sequences and capturing long-term dependencies, we find it quite difficult to apply to users' lifelong sequences in the online inference setting, since it bears the same aforementioned problem of linear computational and storage cost. It is therefore not suitable for online inference, where we require the cost to be constant (Ren et al., 2019; Pi et al., 2019). In vanilla self-attention, when a new click action is generated by the user, we have to recompute the user representation using the entire sequence appended with the new action. Recently, linear self-attention methods have been introduced (Katharopoulos et al., 2020; Choromanski et al., 2020), where (linear) dot products of kernel feature maps are used as attention weights. We find that in our sequential recommendation scenario, the linear attention mechanism empowers us to perform incremental attention over the user's sequence: we can incrementally update the result of the self-attention operation in constant time as new click actions arrive from users. This frees us from the need to store the user's whole historical sequence and enables us to perform real-time online inference, while still enjoying the benefits brought by the vanilla self-attention mechanism.

We observe that the attention mechanism can be considered a soft-search operation: positions more relevant to a given query are assigned larger weights, hence they play more dominant roles in the weighted aggregation. The attention mechanism therefore naturally empowers us to capture users' multi-facet interests from their behavior sequences. As different behaviors of users are triggered by different underlying interests, we can utilize the attention mechanism to soft-search over the user's lifelong sequence and extract sub-sequences related to the user's different interests. Once again, to address the computational and storage inefficiency of vanilla attention in online inference, we design a novel multi-interest extraction module under our incremental attention framework. Combined with the incremental self-attention blocks introduced in the previous paragraph, we present our novel lifelong incremental multi-interest self-attention based sequential recommendation model, namely LimaRec.

We summarize our contributions as follows:


  • We propose a novel incremental self-attention based method for lifelong sequential recommendation, which goes beyond the limitations of RNN-based memory networks, while also possessing their ability to incrementally update the user representation for online inference.

  • Under the same incremental attention framework, we further propose a novel multi-interest extraction module to soft-search the whole sequence for the sub-sequences related to various interests of users.

  • We conduct extensive experiments on four real-world datasets and achieve superior performance over state-of-the-art baselines. We also empirically investigate how the performance of different methods varies on user sequences of increasing length.

2. Related Work

2.1. Sequential Recommendation

Early works on sequential recommendation usually utilize Markov chains (MCs) to capture sequential patterns from users' behaviors. FPMC (Rendle et al., 2010) fuses MCs and matrix factorization to capture both long-term preferences and short-term transitions. Higher-order MCs are used to consider more previous items (He and McAuley, 2016; He et al., 2017). With the advent of deep learning, deep sequential models like recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been extensively investigated in recent years and achieved great success (Hidasi et al., 2015; Hidasi and Karatzoglou, 2018; Tang and Wang, 2018; Chen et al., 2018; Li et al., 2017; Liu et al., 2016; Yu et al., 2016).

Recently, self-attention networks (Vaswani et al., 2017) have demonstrated superior performance in sequential recommendation due to their full parallelism and capacity to capture long-range dependencies. (Kang and McAuley, 2018) uses a binary cross-entropy loss based on inner product scores to train a self-attention network, while Zhang et al. (2018) propose to optimize a triplet margin loss based on Euclidean distance. The Cloze objective, proposed for training BERT (Devlin et al., 2018), is used to improve the performance of sequential recommendation in (Sun et al., 2019). However, these methods have to model the whole behavior sequence to perform inference and have a computational cost linear in the sequence length. Thus they can only handle recent user behaviors in online inference.

2.2. Lifelong Sequential Modeling

The problem of lifelong sequential modeling is first introduced in (Ren et al., 2019) for CTR prediction, where the authors propose a hierarchical periodic memory network (HPMN) to memorize users' behavior sequences. HPMN maintains hierarchical memories for each user to retain the information contained in his/her lifelong sequence. The memories are updated with different periods to capture the multi-scale sequential patterns of users. (Pi et al., 2019) is another method utilizing memory networks. Different from (Ren et al., 2019), which employs a GRU to update the memory, (Pi et al., 2019) adopts the neural Turing machine from (Graves et al., 2014). They further propose a memory utilization regularization strategy to reduce the variance of update weights across different memory slots, as well as a memory induction unit to capture high-order information. Compared to self-attention networks, RNNs have limited ability to capture long-term dependencies (Vaswani et al., 2017). This limits the performance of methods that rely on RNN-based memory networks.

On the other hand, (Pi et al., 2020) and (Qin et al., 2020) propose to address the issue from a different perspective. They argue that since it is computationally expensive to model the whole sequence, and there is much noise in the user's lifelong sequence, we should search for a limited number of the most relevant user behaviors. The retrieved behaviors are then fed into a network to generate the final prediction. (Pi et al., 2020) uses the category ID of items to find the most relevant behaviors, while (Qin et al., 2020) learns a search strategy with reinforcement learning. However, both methods are designed for CTR prediction and rely on information about the target item to perform the search. Since in sequential recommendation the target item is not available in advance, these methods fail to adapt to our setting.

2.3. Modeling Users’ Diverse Interests

The limitation of using a single fixed-length representation to express users' multi-facet interests is pointed out in (Zhou et al., 2018), which introduces a local activation unit to adaptively learn representations of user interests from behavior history with respect to a given ad. DIEN (Zhou et al., 2019) proposes to capture the interest-evolving process of users with an interest extractor layer and an interest evolving layer, which use GRUs as building blocks. A behavior refinement layer is introduced in (Xiao et al., 2020) to obtain better representations of users' historical items for multi-interest extraction. The aforementioned methods are all devised for the CTR prediction task.

Distinguished from the multi-interest models for CTR prediction, (Li et al., 2019) proposes to model users' diverse interests in the sequential recommendation setting. They design a multi-interest extraction layer based on the dynamic capsule routing of (Sabour et al., 2017). A different routing logic is used in (Cen et al., 2020) to extract interests, and another extraction method based on the attention mechanism is also introduced. However, none of these methods is applicable to lifelong sequential recommendation, as they have a computational cost that scales linearly with the sequence length, while online inference requires constant cost.

3. Methodology

3.1. Preliminaries

Notation Description
The i-th item clicked in the user's historical sequence.
The embedding vector of the i-th item of the sequence.
The positional embedding of the i-th position.
The stacked latent representations of the sequence at each position after the l-th self-attention block.
The latent representation up to the i-th position (i.e., the i-th row of the stacked representation).
The stacked output of the l-th self-attention sub-layer (just before the FFN).
The i-th row of that output.
The key/query/value projection matrices of the l-th self-attention sub-layer.
The maintained hidden states for incremental attention of the l-th self-attention block at the t-th step.
The key/value projection matrices of the multi-interest extraction module.
The maintained hidden states for the multi-interest extraction module at the t-th step.
The trainable vector encoding the incentive behind the k-th interest.
The disentangled representation of the sequence under the k-th interest at the t-th step.
Table 1. Important notations.

3.1.1. Problem Formulation

For each user u, we have an ordered historical sequence of the items that u has already interacted with (e.g., clicked), where n denotes the current length of u's historical sequence. For simplicity, we omit the user superscript from now on. In the lifelong sequential recommendation setting, we consider that the user's sequence is dynamically updated according to the user's latest actions in the online serving scenario. The user may have clicked a new item, which is appended to his/her historical sequence, giving an updated sequence of length n+1. We then use this sequence to predict the next item (or the next set of items) u might click. We consider the matching (candidate generation) stage of industrial recommendation systems (Covington et al., 2016; Li et al., 2019; Cen et al., 2020), where we compute a preference score of user u for each item based on the inner product similarity between the latent representation of u's updated historical sequence (which includes the new item) and the embedding vector of that item. The preference scores are then sorted to retrieve the top-N items as the candidate set for u. When the user generates more actions, they are appended to the user's historical sequence, and the same recommendation procedure is repeated for online inference with respect to the user's real-time evolving behaviors.
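As a concrete illustration of the matching stage described above, the sketch below scores every item by its inner product with a user representation and retrieves the top-N candidates. All names, dimensions, and the toy embedding table are illustrative, not from the paper.

```python
import numpy as np

def top_n_candidates(user_repr, item_embeddings, n=3):
    """Score each item by its inner product with the user representation,
    then return the indices of the n highest-scoring items (descending)."""
    scores = item_embeddings @ user_repr           # (num_items,)
    top = np.argpartition(-scores, n)[:n]          # unordered top-n indices
    return top[np.argsort(-scores[top])]           # sorted by score

rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(100, 32))       # toy item embedding table
user_repr = item_embeddings[7] + 0.01 * rng.normal(size=32)  # close to item 7
candidates = top_n_candidates(user_repr, item_embeddings)
```

Using `argpartition` avoids fully sorting the score vector, which matters when the item corpus is large.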

Since we are handling lifelong user sequences, which might contain thousands or even more behaviors, it is infeasible to directly model the whole expanding user sequence in real-time online inference, due to the tremendous computational and storage costs. In addition, with increasing sequence length, it becomes more and more difficult to precisely capture the complex multiple interests of users and the intricate temporal dynamics of users' evolving behavior patterns without being overwhelmed by massive noise. Lifelong sequential recommendation is therefore a non-trivial task. To tackle these issues, we require the recommender to have the following properties under the lifelong sequential recommendation setting:


  • The recommender should maintain a latent representation for each user’s history in a fixed size (constant with respect to the sequence length), instead of storing the whole historical sequence.

  • The maintained representation should efficiently preserve the behavior information contained in the user’s lifelong sequence. It should also capture the intrinsic multiple interests of the user and the involved temporal dynamics of the user’s behaviors.

  • The recommender should be able to continuously (incrementally) update the user representation to adapt to the new click behaviors generated by the user under this online setting.

Note that our task of lifelong sequential recommendation is different from that in (Pi et al., 2019; Ren et al., 2019), where they focus on the CTR prediction task, in which the target item is used as model input. However, the target item is not required in our setting.

3.1.2. Sequential Recommendation with Self-Attention

Kang and McAuley (2018) introduce the SASRec model for sequential recommendation, which borrows the powerful self-attention mechanism from (Vaswani et al., 2017). Self-attention allows SASRec to identify the relevant behaviors in a user’s history for next item prediction, through aggregating the sequence using adaptive weights.

In SASRec, two self-attention blocks are stacked to encode the user sequence. The user's historical sequence is first transformed into a sequence of embedding vectors, where the i-th vector is the sum of the embedding vector of the i-th item and a trainable positional embedding. We then stack the embedding vectors into a matrix E. Self-attention is performed on E, using each element of the sequence as a query to attend to the sequence itself. Concretely, we compute the following output:

(1) Attention(Q, K, V) = softmax(QK^T / √d) V
(2) S = Attention(EW^Q, EW^K, EW^V)

where E denotes the matrix of stacked embeddings and W^Q, W^K, W^V are the trainable projection matrices.

After the self-attention sub-layer, the output is fed to a position-wise Feed-Forward Network (FFN) sub-layer, which is applied to each position in the sequence independently:

(3) F_i = FFN(S_i) = ReLU(S_i W^(1) + b^(1)) W^(2) + b^(2)

where W^(1), W^(2) and b^(1), b^(2) are the parameters of the FFN sub-layer.

Subsequently, a second self-attention block is applied, using the output F of the first block as input. A self-attention sub-layer performs the same computation as Eq. 2 on F, but with a different set of projection weights. The result then goes through another FFN sub-layer to produce the final output, whose i-th row is the latent representation of the user's historical sequence up to the i-th position. Residual connections and layer normalization are also used in SASRec to stabilize network training.

We use the representation at the last (n-th) position as the up-to-date user representation, since it encodes the entire historical sequence, and compute its inner product with an item's embedding vector as the preference score for that item.
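The two sub-layers described above can be sketched in a few lines of NumPy. This is a minimal single-head illustration without residual connections or layer normalization, and all parameter names and shapes are illustrative.

```python
import numpy as np

def causal_self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention with a causal mask, so that
    position i attends only to positions j <= i (as in SASRec)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def ffn(S, W1, b1, W2, b2):
    """Position-wise feed-forward sub-layer, applied to each row of S
    independently."""
    return np.maximum(S @ W1 + b1, 0.0) @ W2 + b2

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))                          # 5 positions, d = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
S = causal_self_attention(X, Wq, Wk, Wv)
F = ffn(S, rng.normal(size=(8, 8)), np.zeros(8),
        rng.normal(size=(8, 8)), np.zeros(8))
```

Because of the causal mask, the first position can only attend to itself, so its attention output is exactly its own value vector.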

3.2. Modeling the Lifelong Sequence with Incremental Attention

Through using the self-attention mechanism both to capture the long-term dependencies in the user's historical sequence and to focus only on the informative (relevant) part to reduce noise, SASRec outperforms state-of-the-art MC/CNN/RNN-based methods. However, a fatal problem arises naturally when we try to apply SASRec to lifelong sequences. Assume the user has clicked a new item; then we have to update the user representation by computing the latent representation of the sequence appended with this item. For the first self-attention block, we need to use the new position n+1 as a query to attend to every position of the sequence:

(4) α_(n+1),j = exp(q_(n+1)^T k_j / √d) / Σ_(j'=1)^(n+1) exp(q_(n+1)^T k_(j') / √d)
(5) o_(n+1) = Σ_(j=1)^(n+1) α_(n+1),j v_j

where q, k, v denote the projected queries, keys and values. Since we use softmax to normalize the scores, we have to compute the inner products (after projection) between the new query q_(n+1) and every key k_j for j = 1, …, n+1, take the exponents, and then normalize them so that they are all positive and sum up to 1. Therefore, we must store the whole sequence in order to compute these inner products and perform the weighted average over the sequence. The same applies to the second self-attention block. This leads to a computational and storage cost of O(n), which is prohibitive in the online setting of lifelong sequential recommendation. Note that when the new action arrives, we do not have to recompute the outputs at previous positions j ≤ n, since we prevent previous positions from attending to subsequent positions.

Recently, linear self-attention methods were introduced (Katharopoulos et al., 2020; Choromanski et al., 2020) to reduce the computational complexity of the self-attention operation from quadratic to linear. In these methods, the exponential similarity score (softmax) is replaced with a kernel: the inner product of kernel feature maps φ(·) is used as the similarity score. Given queries Q, keys K and values V, the i-th output is computed as follows:

V'_i = ( Σ_j φ(q_i)^T φ(k_j) v_j ) / ( Σ_j φ(q_i)^T φ(k_j) )

According to the associativity of matrix multiplication, we can write it in the following form:

V'_i = ( φ(q_i)^T Σ_j φ(k_j) v_j^T ) / ( φ(q_i)^T Σ_j φ(k_j) )

Thus, we just need to compute the summations in the numerator and denominator once, and they can be reused for all queries.
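The equivalence of the two forms can be checked numerically. The sketch below computes both the direct (quadratic) and the reassociated (linear) forms; the feature map used here is a simple illustrative positive map, standing in for the elu-based or random-feature maps used in the cited papers.

```python
import numpy as np

def linear_attention_direct(Q, K, V, phi):
    """Direct form: an explicit (n x n) score matrix, O(n^2) cost."""
    A = phi(Q) @ phi(K).T
    return (A @ V) / A.sum(axis=1, keepdims=True)

def linear_attention_reassociated(Q, K, V, phi):
    """Reassociated form: the key/value summations are computed once
    and reused for every query, O(n) cost."""
    S = phi(K).T @ V                   # sum_j phi(k_j) v_j^T
    z = phi(K).sum(axis=0)             # sum_j phi(k_j)
    return (phi(Q) @ S) / (phi(Q) @ z)[:, None]

phi = lambda x: np.maximum(x, 0.0) + 1e-6   # illustrative positive feature map
rng = np.random.default_rng(2)
Q, K, V = (rng.normal(size=(6, 4)) for _ in range(3))
out_direct = linear_attention_direct(Q, K, V, phi)
out_linear = linear_attention_reassociated(Q, K, V, phi)
```

Both forms produce identical outputs (up to floating-point error), but only the second avoids materializing the n × n score matrix.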

Choromanski et al. (2020) propose the random feature map φ(x) = exp(−‖x‖² / 2) / √m · (exp(w_1^T x), …, exp(w_m^T x)) to serve as an unbiased and low-variance approximation of softmax self-attention. Here the vectors w_1, …, w_m are drawn i.i.d. from N(0, I_d).
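This construction can be checked numerically: with enough random features, the inner product of the feature maps concentrates around the (unnormalized) softmax kernel exp(q·k). The feature count m and the vector scale below are illustrative choices for a tight estimate.

```python
import numpy as np

def positive_random_features(x, W):
    """Performer-style positive random features: with the rows of W drawn
    i.i.d. from N(0, I_d), E[phi(q) . phi(k)] = exp(q . k)."""
    m = W.shape[0]
    return np.exp(W @ x - (x @ x) / 2.0) / np.sqrt(m)

rng = np.random.default_rng(3)
d, m = 4, 100_000
W = rng.normal(size=(m, d))          # m random projections in R^d
q = 0.3 * rng.normal(size=d)
k = 0.3 * rng.normal(size=d)

approx = positive_random_features(q, W) @ positive_random_features(k, W)
exact = float(np.exp(q @ k))
```

Because the features are strictly positive, the resulting attention weights are positive as well, which keeps the linear-attention normalization stable.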

In our setting, we restrict the i-th position to attend only to the positions j ≤ i. Therefore, the output can be computed as:

(6) V'_i = ( Σ_(j≤i) φ(q_i)^T φ(k_j) v_j ) / ( Σ_(j≤i) φ(q_i)^T φ(k_j) )

We find that this restriction also gives us the ability to perform attention over the whole sequence in an incremental manner: we only need to maintain two hidden states. When the sequence is updated with a new action, we can compute the result of using the new position to attend to the historical sequence from the new input and the hidden states alone, without the need to store the entire sequence. Concretely, we maintain and incrementally update two hidden states S_t^(l) and z_t^(l) for the l-th self-attention block (l ∈ {1, 2}) according to the following rules:

(7) S_t^(l) = S_(t−1)^(l) + φ(k_t^(l)) (v_t^(l))^T
(8) z_t^(l) = z_(t−1)^(l) + φ(k_t^(l))

Here S_t^(l) and z_t^(l) are the states at the t-th step (which consider the historical sequence up to the t-th action). In this way, the output of the attention at the t-th position (which uses position t to query the sequence) can be computed as follows:

(9) o_t^(l) = ( φ(q_t^(l))^T S_t^(l) ) / ( φ(q_t^(l))^T z_t^(l) )

As we can see, the computation of o_t^(l) depends only on the current output from the previous layer (for the first block, the embedding of the t-th item) and the up-to-date hidden states. The user sequence before the t-th position becomes irrelevant, and we can discard it. The output of the FFN sub-layer at each self-attention layer can be computed solely from o_t^(l). Therefore, for each user, the two hidden states of each self-attention block are all we need to store and to dynamically update when a new action comes from the user. Since we use two self-attention blocks in our model, we need to maintain four hidden states in total for each user. Although S_t^(l) is a matrix, its dimensions are small constants in our scenario, hence the cost to store the hidden states is negligible compared to the cost of storing the whole sequence. By maintaining fixed-size states, we reduce the computational and storage complexity of online inference with newly arrived actions from O(n) to O(1).
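The update rules above can be exercised end to end: the incremental version maintains only the two states, yet reproduces a full recomputation over the stored sequence at every step. The feature map is again an illustrative stand-in, and projections are omitted for brevity.

```python
import numpy as np

phi = lambda x: np.maximum(x, 0.0) + 1e-6   # illustrative positive feature map

def full_causal_linear_attention(Q, K, V):
    """Recompute attention over the whole stored sequence at each step (O(n))."""
    out = np.zeros_like(V)
    for t in range(len(Q)):
        w = phi(K[: t + 1]) @ phi(Q[t])     # scores for positions j <= t
        out[t] = (w @ V[: t + 1]) / w.sum()
    return out

class IncrementalAttention:
    """Keep two fixed-size hidden states so each new action costs O(1)."""
    def __init__(self, d_phi, d_v):
        self.S = np.zeros((d_phi, d_v))     # running sum of phi(k_t) v_t^T
        self.z = np.zeros(d_phi)            # running sum of phi(k_t)

    def step(self, q, k, v):
        fk = phi(k)
        self.S += np.outer(fk, v)           # Eq. 7-style numerator update
        self.z += fk                        # Eq. 8-style denominator update
        fq = phi(q)
        return (fq @ self.S) / (fq @ self.z)

rng = np.random.default_rng(4)
Q, K, V = (rng.normal(size=(8, 4)) for _ in range(3))
inc = IncrementalAttention(4, 4)
incremental = np.stack([inc.step(Q[t], K[t], V[t]) for t in range(8)])
```

The stepwise outputs match the full recomputation exactly, while the incremental object never stores more than the two fixed-size states.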

Compared to the RNN-based memory networks (Pi et al., 2019; Ren et al., 2019), which utilize trainable gates to update the memory, our incremental-attention-based method aggregates all the positions of the historical sequence weighted by their "importance": the (unnormalized) score assigned to the j-th position is the kernel similarity φ(q_t)^T φ(k_j) between the current query and that position's key. Self-attention possesses the ability to filter out irrelevant parts of the historical sequence by assigning them negligible weights (Kang and McAuley, 2018), and our method retains this ability in an incremental fashion. This is particularly helpful for lifelong sequences, where richer historical behavior data also brings more noise. On the other hand, RNN-based methods may get overwhelmed by noise when modeling long sequences. Besides, they use the same set of RNN parameters to update the memory for each user; however, different users' behaviors have distinct transition patterns, and shared parameters fail to adapt to the personalized transition pattern of each individual user.

3.3. Extracting Users’ Multi-Facet Interests

The longer the user's historical sequence grows, the more diverse the interests contained in it may be. Hence, if we directly model the whole sequence with a single representation, we cannot capture the multiple interests that the user has. Instead, we may get entangled among the various interests and fail to correctly identify the different driving forces behind each of the user's actions. Therefore, ideally, we should disentangle the user's whole sequence into multiple sub-sequences, each containing only the part related to a specific interest.

Coincidentally, we observe that the attention mechanism empowers us to complete this task. Computing attention using the sequence itself as keys and values can be thought of as performing a soft search over the sequence with the given query, where the positions relevant to the query are assigned larger weights and thus dominate the weighted-average step. Therefore, the attention mechanism has the inherent capability to softly extract the sub-sequences related to a query, and we use it as the building block of our proposed multi-interest extraction module.

We may characterize the various interests as a common collection that is shared among all users, where each individual user manifests a subset of it in his/her historical sequence. We assume the collection consists of K potential interests, where K is a tunable hyper-parameter. We represent each interest with a trainable model parameter μ_k, which encodes the underlying incentive behind this interest. μ_k is used to query the user sequence (after passing through the two linear self-attention blocks):

(10) u_(k,t) = ( Σ_(j≤t) φ(μ_k)^T φ(k̃_j) ṽ_j ) / ( Σ_(j≤t) φ(μ_k)^T φ(k̃_j) )

Here u_(k,t) represents the disentangled representation of the sequence at the t-th position under the k-th interest, and k̃_j and ṽ_j are the key/value projections of the j-th output of the second self-attention block. Through linear attention, once again, we can incrementally update u_(k,t), like what we do for the self-attention blocks in Section 3.2. Since we use the same matrices to project keys and values for every interest, we only need to maintain two hidden states, which are shared among all the interests. The two states S̃_t and z̃_t are updated in the following manner:

(11) S̃_t = S̃_(t−1) + φ(k̃_t) ṽ_t^T
(12) z̃_t = z̃_(t−1) + φ(k̃_t)

Armed with the up-to-date S̃_t and z̃_t, we can compute every u_(k,t) in O(1) time with each incoming new behavior from the user, similar to Eq. 9. The up-to-date set of vectors {u_(k,t)} (k = 1, …, K) constitutes the final multi-interest representation of the current user sequence.
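A sketch of the shared-state idea: K interest queries re-read one pair of running states after every new behavior, so storage stays constant in the sequence length. All parameter names, shapes, and the feature map are illustrative, and whether the interest vectors are themselves projected before querying is an implementation detail this sketch does not pin down.

```python
import numpy as np

phi = lambda x: np.maximum(x, 0.0) + 1e-6   # illustrative positive feature map

class MultiInterestExtractor:
    """K trainable interest queries share a single pair of hidden states:
    folding in a new position is O(1) in the sequence length."""
    def __init__(self, interest_queries, Wk, Wv):
        self.mu = interest_queries           # (K, d) trainable interest vectors
        self.Wk, self.Wv = Wk, Wv            # shared key/value projections
        self.S = np.zeros((Wk.shape[1], Wv.shape[1]))
        self.z = np.zeros(Wk.shape[1])

    def step(self, f):
        """Fold one new latent position f into the shared states, then
        query them with every interest vector at once."""
        fk = phi(f @ self.Wk)
        self.S += np.outer(fk, f @ self.Wv)
        self.z += fk
        fq = phi(self.mu)                    # (K, d_phi)
        return (fq @ self.S) / (fq @ self.z)[:, None]   # (K, d_v)

rng = np.random.default_rng(5)
d, num_interests = 4, 3
extractor = MultiInterestExtractor(rng.normal(size=(num_interests, d)),
                                   rng.normal(size=(d, d)),
                                   rng.normal(size=(d, d)))
for _ in range(10):                          # stream ten behaviors
    interests = extractor.step(rng.normal(size=d))
```

Because the states are shared across interests, adding more interest vectors increases only the cheap re-query step, not the stored state.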

3.4. Regularizing the Multi-Interest Representations

The multi-interest extraction module proposed in Section 3.3 conducts a soft search over the whole user sequence to retrieve the "sub-sequence" relevant to each interest. We would like the final representations to be disparate so that they can indeed capture the diverse interests of users. Otherwise, they may contain redundant information and collapse to encoding overlapping interests. Therefore, we should enforce them to distill distinct information from the user sequence. We reckon that the target item (the next item) should only be inferred from a single interest representation (the one with the largest inner product score), since each user behavior should only be triggered by a single specific interest. Therefore, we propose the following regularization loss:

(13)

where the score of each interest representation is its inner product with the embedding of the target item. For the sequential recommendation task, we apply the binary cross-entropy loss:

(14) L_rec = −log σ(⟨u, e_target⟩) − Σ_(j∼P_n) log(1 − σ(⟨u, e_j⟩))

Here u denotes the selected interest representation, e_target is the embedding of the target item, and P_n is a negative sampling distribution (we simply use a uniform one, as in (Kang and McAuley, 2018)).

The final loss function for a user is then given by:

(15) L = L_rec + λ · L_reg

where λ is a hyperparameter controlling the weight of the regularization loss.

4. Experiments

Dataset #users #items #actions avg. length
ML-1M 6,040 3,416 1M 165.50
ML-25M 162,542 36,728 25M 153.40
XLong 20,000 748,471 25M 796.40
Industrial 430,311 7,093,352 161M 374.73
Table 2. Dataset statistics.

In this section, we empirically analyze the effectiveness of our proposed method and present the experimental results. We conduct the experiments in order to answer the following research questions:

  • Does modeling lifelong sequences really contribute to the recommendation performance? How does LimaRec compare with state-of-the-art baselines?

  • How does the performance of different sequential models vary as the length of users’ historical sequences increases?

  • Is it essential to consider the multi-facet interests of users in lifelong sequences? How does the number of interests we model influence the recommendation performance?

4.1. Datasets

We employ the following four real-world datasets to conduct experiments:


  • MovieLens (Harper and Konstan, 2015): A series of frequently used movie-rating datasets for evaluating recommendation systems. We use two versions: MovieLens 1M (ML-1M) and MovieLens 25M (ML-25M), which include 1 million and 25 million ratings, respectively.

  • XLong (Ren et al., 2019): The first public dataset tailored for lifelong sequential modeling. This dataset is collected from Alimama’s online advertising platform, and contains relatively longer behavior sequences for each user.

  • Industrial: A dataset sampled from user click logs on a top e-commerce platform within a period of one year.

The statistics of the datasets are shown in Table 2.

4.2. Competitors

We compare the proposed LimaRec with a variety of baselines. The first group includes the state-of-the-art lifelong sequential modeling models:


  • MIMN (Pi et al., 2019) utilizes a memory network based on NTM (Graves et al., 2014) to memorize the user’s historical sequence. It improves the traditional NTM with a memory utilization regularization strategy and a memory induction unit.

  • HPMN (Ren et al., 2019) enhances memory networks with a hierarchical architecture with multiple update periods to mine multi-scale patterns in users’ behavior sequence.

Note that these two methods are both developed for the CTR prediction task, where the personalized memory representation is concatenated with the feature vector of the target item and some user-side features, and the concatenated vector is then fed into a multi-layer perceptron to produce the estimate of the user response probability. Since in our setting the target item is not available as model input, we instead generate the candidate set by computing the inner-product scores between the user's representation and the items. Hence, we modify MIMN and HPMN accordingly.
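The candidate-generation step described above can be sketched as a simple inner-product scoring over the item embedding table. `generate_candidates` is a hypothetical helper, not code from MIMN or HPMN:

```python
import numpy as np

def generate_candidates(user_repr, item_embs, top_k=10):
    """Score every item by its inner product with the user
    representation and return the top_k item indices, best first."""
    scores = item_embs @ user_repr                   # shape (num_items,)
    top = np.argpartition(-scores, top_k)[:top_k]    # unordered top_k
    return top[np.argsort(-scores[top])]             # sort by score
```

`np.argpartition` keeps retrieval linear in the number of items; only the small `top_k` slice is fully sorted.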

The second group contains the sequential recommendation methods designed to capture users’ multiple interests:


  • MIND (Li et al., 2019) borrows the idea of dynamic capsule routing from (Sabour et al., 2017). Different from (Sabour et al., 2017), it employs a shared bi-linear mapping matrix and randomly initialized routing logits.

  • ComiRec (Cen et al., 2020) introduces two versions of multi-interest extraction modules. The first, ComiRec-DR, follows the original routing mechanism in (Sabour et al., 2017); the second, ComiRec-SA, explores a self-attentive mechanism whose attention differs from the scaled dot-product attention in SASRec.

The last group is composed of:


  • GRU4Rec (Hidasi et al., 2015) is the first work to introduce RNNs to model users' sequential behaviors. It was originally designed for session-based recommendation.

  • SASRec (Kang and McAuley, 2018), the vanilla self-attention networks described in Section 3.1.2.

          ML-1M                          | ML-25M                         | XLong
Method    HR@5 NDCG@5 HR@10 NDCG@10 | HR@5 NDCG@5 HR@10 NDCG@10 | HR@5 NDCG@5 HR@10 NDCG@10
GRU4Rec-40 0.2646 0.1847 0.3805 0.2221 0.8206 0.6526 0.9047 0.6801 0.1328 0.0820 0.1603 0.0910
GRU4Rec-1000 0.2576 0.1775 0.3770 0.2158 0.8141 0.6484 0.8967 0.6754 0.1326 0.0822 0.1673 0.0935
ComiRec-DR-40 0.4192 0.2945 0.5760 0.3451 0.8964 0.7526 0.9296 0.7637 0.4005 0.2625 0.5916 0.3241
ComiRec-DR-1000 0.3997 0.2777 0.5531 0.3271 0.8342 0.6657 0.8960 0.6860 0.2195 0.1627 0.2842 0.1834
ComiRec-SA-40 0.3901 0.2651 0.5531 0.3175 0.8094 0.6239 0.8992 0.6534 0.2977 0.1963 0.4898 0.2578
ComiRec-SA-1000 0.4169 0.2874 0.5765 0.3389 0.8189 0.6404 0.9046 0.6685 0.2306 0.1555 0.3712 0.2004
MIND-40 0.5232 0.3624 0.6911 0.4169 0.8783 0.7261 0.9331 0.7440 0.2106 0.1317 0.2607 0.1481
MIND-1000 0.5331 0.3693 0.6995 0.4233 0.8710 0.7229 0.9325 0.7429 0.1844 0.1205 0.2350 0.1367
MIMN 0.2414 0.1594 0.3747 0.2023 0.7878 0.5997 0.8902 0.6332 0.1966 0.1334 0.2385 0.1470
HPMN 0.4561 0.3168 0.6149 0.3682 0.9283 0.8270 0.9559 0.8359 0.3700 0.2599 0.4796 0.2944
SASRec-40 0.6631 0.5026 0.7772 0.5397 0.9265 0.7936 0.9710 0.8083 0.3337 0.2359 0.4657 0.2783
SASRec-1000 0.6954 0.5428 0.8043 0.5786 0.9325 0.7982 0.9755 0.8124 0.6805 0.5282 0.8075 0.5675
LimaRec 0.8699 0.6774 0.9606 0.7072 0.9699 0.8717 0.9867 0.8772 0.8207 0.6682 0.8931 0.6918
Table 3. Overall recommendation performance on MovieLens and XLong. The suffixes "-40" and "-1000" indicate whether the method is trained and evaluated using only the 40 most recent behaviors or with lifelong sequences (up to 1000 behaviors). The methods without a suffix use lifelong sequences.
HR@5 NDCG@5 HR@10 NDCG@10
GRU4Rec-40 0.1772 0.1241 0.2258 0.1399
GRU4Rec-1000 0.1744 0.1274 0.2247 0.1436
ComiRec-DR-40 0.4616 0.3361 0.6145 0.3856
ComiRec-DR-1000 0.4158 0.3073 0.5523 0.3514
ComiRec-SA-40 0.4502 0.3284 0.5985 0.3761
ComiRec-SA-1000 0.2906 0.1964 0.4263 0.2401
MIND-40 0.2220 0.1580 0.2766 0.1758
MIND-1000 0.2443 0.1835 0.3096 0.2047
MIMN 0.2665 0.2003 0.3476 0.2265
HPMN 0.3578 0.2961 0.4262 0.3181
SASRec-40 0.4725 0.3644 0.5782 0.3986
SASRec-1000 0.7701 0.6349 0.8610 0.6644
LimaRec 0.8000 0.6555 0.8911 0.6851
Table 4. Overall recommendation performance on the Industrial dataset.

4.3. Settings

We set the maximum length of users' lifelong sequences to 1000, as in (Ren et al., 2019), on all four datasets. For the methods that are not specially designed for lifelong sequential modeling, we also evaluate them using only the 40 most recent behaviors as users' history. In other words, they are trained and evaluated on sub-sequences of length up to 40, and user sequences with more than 40 behaviors are divided into non-overlapping chunks of length 40. By comparing the performance of these methods in the two settings, we can illustrate the importance and difficulty of incorporating lifelong sequences. We note that except for GRU4Rec, HPMN, MIMN, and our proposed LimaRec, the models trained using lifelong sequences are not practical in the online inference setting due to their huge computational cost.
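The chunking of long histories into non-overlapping length-40 sub-sequences can be sketched as follows (a hypothetical helper, assuming the behavior sequence is given as a Python list in chronological order):

```python
def chunk_sequence(seq, max_len=40):
    """Split a chronological behavior sequence into non-overlapping
    chunks of at most max_len items; the last chunk may be shorter."""
    return [seq[i:i + max_len] for i in range(0, len(seq), max_len)]
```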

We implement our method and all the baselines with PyTorch. We train all models using the Adam optimizer with a learning rate of 0.001 and a batch size of 128. All methods are trained for a maximum of 500 epochs, and the embedding dimension is set to 32. For our model, we use a regularization coefficient of 0.01 and a dropout ratio of 0.1. The number of interests on each dataset is tuned from 1 to 9. The hyperparameters of the baseline methods are also tuned to the best on each dataset.

Following (Cen et al., 2020; Kang and McAuley, 2018), we evaluate the performance with two widely used ranking metrics: Hit Rate (HR) and NDCG. HR@N counts the proportion of cases in which the ground-truth target item appears among the top-N, while NDCG@N assigns larger weights to higher positions. We report both metrics at N=5 and N=10. The last item of each user's behavior sequence is used for evaluation, and the remaining behaviors are used for training. We randomly generate 100 negative samples and pair them with the ground-truth target item for the compared methods to rank, as in (Kang and McAuley, 2018).
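With a single relevant item per user and 100 sampled negatives, both metrics reduce to a function of the ground-truth item's rank; a minimal sketch (the helper name `hr_ndcg_at_n` is ours):

```python
import math

def hr_ndcg_at_n(rank, n):
    """Per-user HR@n and NDCG@n when there is a single ground-truth
    item; `rank` is its 0-based position among the 101 ranked
    candidates (1 positive + 100 negatives). With one relevant item
    the ideal DCG is 1, so NDCG reduces to 1 / log2(rank + 2)."""
    if rank < n:
        return 1.0, 1.0 / math.log2(rank + 2)
    return 0.0, 0.0
```

The per-user values are then averaged over all evaluated users.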

Note that LimaRec, HPMN, and the multi-interest baselines yield multiple representation vectors for each user. For efficiency, we select the representation vector with the largest inner-product score with the target item, and use this single vector to compute the scores for all the negative items. The exact evaluation protocol would instead, for each negative item, compute the inner product with each representation vector independently and use the maximum as that item's score.
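The difference between the efficient scoring we use and the exact protocol can be sketched as follows (hypothetical helpers; `interests` stands for the matrix of a user's representation vectors):

```python
import numpy as np

def score_items_exact(interests, item_embs):
    """Exact protocol: each item is scored by its own
    best-matching interest vector."""
    return (item_embs @ interests.T).max(axis=1)

def score_items_approx(interests, item_embs, target_emb):
    """Efficient variant used in our evaluation: pick the single
    interest closest to the target item, then score every
    candidate with that one vector."""
    best = interests[np.argmax(interests @ target_emb)]
    return item_embs @ best
```

The two variants can differ whenever a negative item is closer to a non-selected interest than to the one chosen for the target.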

Figure 1. Recommendation performance on users with increasingly longer behavior sequences (MovieLens).
Figure 2. Recommendation performance on users with increasingly longer behavior sequences (XLong and Industrial).

4.4. Overall Performance Comparison

The results are summarized in Table 3 and Table 4. From the two tables, we have the following findings:


  • Finding 1 - Incorporating lifelong sequences can boost the recommendation performance. Comparing SASRec-40 with SASRec-1000, we find that training SASRec with lifelong sequences indeed improves the recommendation performance. Since scaled dot-product self-attention can naturally identify relevant information in the user sequence with adaptive attention weights, SASRec benefits from longer historical sequences without being overwhelmed by noise. We also observe that HPMN outperforms the multi-interest models in most scenarios, which further illustrates the benefits brought by lifelong sequences.

  • Finding 2 - Modeling lifelong sequences is not a simple task. We find that most multi-interest methods and GRU4Rec, which are not tailored for lifelong sequential modeling, decrease in performance in most cases when fed lifelong sequences. These models fail to extract richer information and are instead overwhelmed by noise when handling far longer sequences. ComiRec-SA performs better with lifelong sequences on two of the four datasets, which again demonstrates the merit of the self-attention mechanism.

  • Finding 3 - LimaRec consistently outperforms both the state-of-the-art lifelong sequential modeling baselines and the multi-interest models on all four datasets. This illustrates the superiority of our incremental attention mechanism over memory networks, as well as the effectiveness of our proposed multi-interest extraction module.

4.5. Performance regarding Sequence Length

4.5.1. Settings

In this section, we evaluate how the performance of different methods varies as the length of users' behavior sequences increases. Concretely, we extract the subset of users whose behavior sequences have length greater than or equal to a threshold T, and compute the performance of different methods only on this subset of users. We then observe how the performance varies as we increase the threshold T. Note that when T is increased, the subset contains fewer users. We present the results in Figure 1 and Figure 2. All the compared methods are trained using lifelong sequences. The shaded region in light blue shows the cumulative density of the distribution of user sequence lengths.

Figure 3. The impact of the number of interests on the hit rate and diversity.

4.5.2. Findings

Surprisingly, we find that the recommendation performance varies differently on MovieLens versus XLong/Industrial. From Figure 2, we see that the performance increases for users with longer sequences. This indicates that we can generate better recommendations for active users with rich behavior histories, and further corroborates the importance of integrating lifelong sequences into sequential recommendation. On ML-1M and ML-25M, however, we observe that the performance drops on longer sequences. The MovieLens datasets were collected over the course of years (~3 years for ML-1M and ~25 years for ML-25M). For a user with hundreds of actions, these actions may span a period of several years. On the other hand, users with few actions may generate all of them within a short period, typically a day, and never log in again. The users with longer sequences are, in fact, "sparser" than the ones with few behaviors. Hence, their sequences contain much noise and exhibit fuzzy behavior transition patterns, making them difficult to model. Nevertheless, using lifelong sequences still contributes to the recommendation performance, as we see in Table 3. Moreover, on these two datasets, the performance of our proposed LimaRec degrades more slowly than that of the baseline methods as the sequence length increases, which indicates the superiority of LimaRec in modeling lifelong sequential behaviors.

4.6. Analysis of the Multi-Interest Extraction Module

4.6.1. Settings

Here we empirically investigate the effectiveness of our proposed incremental attention based multi-interest extraction module. We vary the number of interests K modeled in LimaRec from 1 to 9, train the model, and measure the hit rate. To better understand the nature of users' multi-facet interests, we also evaluate the recommendation diversity with respect to different values of K. We employ the metric Diversity@N from (Cen et al., 2020), which computes the fraction of heterogeneous item pairs (i.e., pairs having different categories) in the top-N list. The results are reported in Figure 3.
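The Diversity@N metric of (Cen et al., 2020) can be sketched as follows, assuming a category lookup per item (the helper name is ours):

```python
from itertools import combinations

def diversity_at_n(top_items, categories):
    """Fraction of item pairs in the top-N list whose categories
    differ; `categories` maps item id -> category."""
    pairs = list(combinations(top_items, 2))
    if not pairs:
        return 0.0
    diff = sum(categories[a] != categories[b] for a, b in pairs)
    return diff / len(pairs)
```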

4.6.2. Findings

We observe that the hit rate using a single interest is significantly worse than that obtained using multiple interests, as the lifelong sequences of users, by nature, contain diverse interests. When we set K=1, we are essentially using a single interest vector to query the latent representation of the sequence (after passing it through the two incremental self-attention blocks and projecting it to a new latent space). This would only amplify the noise contained in lifelong sequences. We obtain considerable performance boosts (in terms of hit rate) with increasing K. Hence, it is essential to consider the multi-facet interests of users. Since it becomes easier for the model to recommend more diverse items with a larger K, we also find that, in general, the recommendation diversity tends to increase as K increases. There are still some fluctuations as K varies, which may be caused by the mismatch between the K we employed and the true number of interests in the user's historical sequence (which is unknown). One has to carefully tune K to strike a balance between traditional metrics like hit rate/NDCG and recommendation diversity.

5. Conclusions

In this work, we considered the problem of lifelong sequential recommendation, where the user representation has to be continuously updated in constant time to satisfy the strict latency and storage constraints of online systems, while still preserving the behavior patterns in the historical sequence. In contrast to lifelong sequential modeling methods based on RNN memory networks, we proposed a novel model built upon incremental self-attention. The self-attention mechanism enables us to identify and retrieve relevant information from the user's lifelong sequence without succumbing to the massive noise contained in it. We further proposed a multi-interest extraction module to soft-search the lifelong sequence for the behaviors relevant to each interest. Extensive experiments have demonstrated the superiority of our method.

References

  • I. Bello, B. Zoph, A. Vaswani, J. Shlens, and Q. V. Le (2019) Attention augmented convolutional networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Cited by: §1.
  • Y. Bengio, P. Simard, and P. Frasconi (1994) Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5 (2). Cited by: §1.
  • Y. Cen, J. Zhang, X. Zou, C. Zhou, H. Yang, and J. Tang (2020) Controllable multi-interest framework for recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Cited by: §1, §2.3, §3.1.1, 2nd item, §4.3, §4.6.1.
  • X. Chen, H. Xu, Y. Zhang, J. Tang, Y. Cao, Z. Qin, and H. Zha (2018) Sequential recommendation with user memory networks. In Proceedings of the eleventh ACM international conference on web search and data mining, pp. 108–116. Cited by: §2.1.
  • K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Cited by: §1.
  • K. Choromanski, V. Likhosherstov, D. Dohan, X. Song, A. Gane, T. Sarlos, P. Hawkins, J. Davis, A. Mohiuddin, L. Kaiser, et al. (2020) Rethinking attention with performers. arXiv preprint arXiv:2009.14794. Cited by: §A.2, §1, §3.2.
  • P. Covington, J. Adams, and E. Sargin (2016) Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM conference on recommender systems, Cited by: §1, §3.1.1.
  • Z. Dai, Z. Yang, Y. Yang, J. G. Carbonell, Q. Le, and R. Salakhutdinov (2019) Transformer-XL: attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Cited by: §1.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §2.1.
  • A. Graves, G. Wayne, and I. Danihelka (2014) Neural turing machines. arXiv preprint arXiv:1410.5401. Cited by: §1, §2.2, 1st item.
  • F. M. Harper and J. A. Konstan (2015) The movielens datasets: history and context. Acm transactions on interactive intelligent systems (tiis) 5 (4), pp. 1–19. Cited by: 1st item.
  • R. He, W. Kang, and J. McAuley (2017) Translation-based recommendation. In Proceedings of the eleventh ACM conference on recommender systems, pp. 161–169. Cited by: §2.1.
  • R. He and J. McAuley (2016) Fusing similarity models with markov chains for sparse sequential recommendation. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 191–200. Cited by: §2.1.
  • B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk (2015) Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939. Cited by: §2.1, 1st item.
  • B. Hidasi and A. Karatzoglou (2018) Recurrent neural networks with top-k gains for session-based recommendations. In Proceedings of the 27th ACM international conference on information and knowledge management, pp. 843–852. Cited by: §2.1.
  • W. Kang and J. McAuley (2018) Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), Cited by: §A.2, §1, §1, §1, §2.1, §3.1.2, §3.2, §3.4, 2nd item, §4.3.
  • A. Katharopoulos, A. Vyas, N. Pappas, and F. Fleuret (2020) Transformers are RNNs: fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pp. 5156–5165. Cited by: §1.
  • C. Li, Z. Liu, M. Wu, Y. Xu, H. Zhao, P. Huang, G. Kang, Q. Chen, W. Li, and D. L. Lee (2019) Multi-interest network with dynamic routing for recommendation at tmall. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Cited by: §1, §2.3, §3.1.1, 1st item.
  • J. Li, P. Ren, Z. Chen, Z. Ren, T. Lian, and J. Ma (2017) Neural attentive session-based recommendation. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1419–1428. Cited by: §2.1.
  • D. Lian, Y. Wu, Y. Ge, X. Xie, and E. Chen (2020) Geography-aware sequential location recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Cited by: §1.
  • V. Likhosherstov, K. Choromanski, J. Davis, X. Song, and A. Weller (2020) Sub-linear memory: how to make performers slim. arXiv preprint arXiv:2012.11346. Cited by: §A.2.
  • Q. Liu, S. Wu, D. Wang, Z. Li, and L. Wang (2016) Context-aware sequential recommendation. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 1053–1058. Cited by: §2.1.
  • N. Parmar, A. Vaswani, J. Uszkoreit, L. Kaiser, N. Shazeer, A. Ku, and D. Tran (2018) Image transformer. In International Conference on Machine Learning, pp. 4055–4064. Cited by: §1.
  • Q. Pi, W. Bian, G. Zhou, X. Zhu, and K. Gai (2019) Practice on long sequential user behavior modeling for click-through rate prediction. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Cited by: §1, §1, §2.2, §3.1.1, §3.2, 1st item.
  • Q. Pi, G. Zhou, Y. Zhang, Z. Wang, L. Ren, Y. Fan, X. Zhu, and K. Gai (2020) Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Cited by: §1, §1, §2.2.
  • J. Qin, W. Zhang, X. Wu, J. Jin, Y. Fang, and Y. Yu (2020) User behavior retrieval for click-through rate prediction. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Cited by: §1, §2.2.
  • K. Ren, J. Qin, Y. Fang, W. Zhang, L. Zheng, W. Bian, G. Zhou, J. Xu, Y. Yu, X. Zhu, et al. (2019) Lifelong sequential modeling with personalized memorization for user response prediction. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Cited by: §1, §1, §1, §1, §2.2, §3.1.1, §3.2, 2nd item, 2nd item, §4.3.
  • S. Rendle, C. Freudenthaler, and L. Schmidt-Thieme (2010) Factorizing personalized markov chains for next-basket recommendation. In Proceedings of the 19th international conference on World wide web, pp. 811–820. Cited by: §2.1.
  • S. Sabour, N. Frosst, and G. E. Hinton (2017) Dynamic routing between capsules. In Advances in Neural Information Processing Systems, Vol. 30. Cited by: §2.3, 1st item, 2nd item.
  • F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang (2019) BERT4Rec: sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM international conference on information and knowledge management, pp. 1441–1450. Cited by: §1, §2.1.
  • J. Tang and K. Wang (2018) Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 565–573. Cited by: §2.1.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Cited by: §1, §1, §2.1, §2.2, §3.1.2.
  • X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Cited by: §1.
  • Z. Xiao, L. Yang, W. Jiang, Y. Wei, Y. Hu, and H. Wang (2020) Deep multi-interest network for click-through rate prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 2265–2268. Cited by: §1, §2.3.
  • Y. Xu, M. Li, L. Cui, S. Huang, F. Wei, and M. Zhou (2020) Layoutlm: pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1192–1200. Cited by: §1.
  • F. Yu, Q. Liu, S. Wu, L. Wang, and T. Tan (2016) A dynamic recurrent model for next basket recommendation. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pp. 729–732. Cited by: §2.1.
  • S. Zhang, Y. Tay, L. Yao, and A. Sun (2018) Next item recommendation with self-attention. arXiv preprint arXiv:1808.06414. Cited by: §1, §2.1.
  • G. Zhou, N. Mou, Y. Fan, Q. Pi, W. Bian, C. Zhou, X. Zhu, and K. Gai (2019) Deep interest evolution network for click-through rate prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 5941–5948. Cited by: §2.3.
  • G. Zhou, X. Zhu, C. Song, Y. Fan, H. Zhu, X. Ma, Y. Yan, J. Jin, H. Li, and K. Gai (2018) Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1059–1068. Cited by: §1, §1, §2.3.

Appendix A Appendix

a.1. Dataset Pre-processing

For the MovieLens datasets we treat the presence of a movie rating as implicit feedback (i.e., we consider it as an action in the user's behavior history). For all datasets, we drop users and items with fewer than 5 related actions. For the Industrial dataset, 1% of users are randomly sampled from those who were active on February 1, 2021, and their click histories are collected. The full user-item interaction data contains 71,086,374 items and 20,059,521,441 actions from 42,995,680 users. The negative items used in evaluation for each user are randomly sampled, using a uniform distribution, from the set of items the user has not interacted with. We plan to open this dataset to further nourish community development in this field.

a.2. Parameter Settings

As mentioned above, we implement our method and all the baselines with PyTorch. We use an efficient implementation of the linear attention mechanism (Choromanski et al., 2020) provided by (Likhosherstov et al., 2020) for our LimaRec. All models are trained with an embedding dimension of 32 and a dropout rate of 0.1. We set the maximum number of training epochs to 500 on the ML-1M dataset, 200 on the ML-25M and XLong datasets, and 20 on Industrial. The number of self-attention blocks is set to 2 for SASRec, and we employ a single attention head, as it is pointed out in (Kang and McAuley, 2018) that using multiple attention heads does not contribute to the recommendation performance. The regularization coefficient is set to 0.01 for HPMN and MIMN. We tune the number of interests used in ComiRec and MIND, as well as the number of hierarchical GRU layers in HPMN, from 1 to 9. The number of memory slots of MIMN is set to 3.

a.3. Experiment Environment

We implement all models with PyTorch, and conduct experiments on a server with NVIDIA V100 GPUs. Regarding software versions, Python 3.6.9 and PyTorch 1.8.0 are used.

a.4. Ablation Study of Multi-Interest Extraction Module

a.4.1. Settings

In this section, we conduct an ablation analysis to further study the effectiveness of the multi-interest extraction module and to compare the performance of incremental attention with vanilla self-attention. We evaluate the recommendation performance of LimaRec without the multi-interest extraction layer, i.e., using only the two incremental self-attention blocks. The results are presented in Table 5.

HR@5 NDCG@5 HR@10 NDCG@10
ML-1M 0.6548 0.5003 0.7803 0.5410
ML-25M 0.9172 0.7735 0.9686 0.7904
XLong 0.6648 0.5058 0.7917 0.5470
Industrial 0.7518 0.6147 0.8463 0.6454
Table 5. Performance of LimaRec without the Multi-Interest Extraction Module.

a.4.2. Findings

We find that LimaRec performs notably worse without the multi-interest extraction module, which further indicates the necessity of modeling and capturing the multi-facet interests in users' lifelong sequences. Comparing the results with SASRec-1000 in Table 3, we observe that the performance of LimaRec without multi-interest modeling (note that in this case, LimaRec only utilizes the two incremental self-attention blocks) is only slightly worse than that of SASRec using lifelong sequences. This shows that the incremental attention mechanism indeed shares the same power as vanilla self-attention to capture long-term dependencies in lifelong sequences, identify relevant information, and filter out noise.