1 Introduction
In many online e-commerce scenarios, user profiles usually cannot be obtained, so session-based recommendation has become an important solution for anonymous recommendation. A session-based recommender (SBR) system learns users' preferences by mining sequential patterns from users' chronological historical behavior, without user profiles, to predict users' future interests within a session. Most traditional Markov-chain-based SBR models, e.g., FPMC
[33] and FOSSIL [13], conduct sequence modeling and prediction by considering only the user's last behavior. Later, RNN-based models treat the historical behaviors of each user as a strictly ordered, temporally dependent sequence, like sentences in a language. Methods such as GRU4REC [15] markedly increase performance in many real scenarios because of their effectiveness in storing short-term information. However, they assume that adjacent items in a session have a fixed sequential dependence, which cannot capture variations in user interest; as a result, they are prone to introducing wrong dependencies. To address this issue, later models fusing self-attention mechanisms, such as SASRec [17], were proposed. Guo et al. [48] further improve the attention-based approaches by introducing specialized human sentiment factors. The aforementioned attention-based methods prefer to model unidirectional message transformation between adjacent items in a sequence. Such transformation may lose sight of the relevant information of the whole sequence. For example, in a music player app, a user may randomly play an album or a certain type of music, generating different playback records, but this does not mean that the user's interest has changed. In other words, strictly modeling the user's local click records while ignoring the global relationship may lead to overfitting. To resolve this problem of attention-based methods, GNN-based models like SR-GNN [43] and GC-SAN [45] utilize graphs to capture the coherence of items within a session, owing to their powerful ability to represent structured data, and adopt attention layers to learn long-term dependence. Despite the leading performance of GNN-based models compared with traditional SBR methods, great challenges remain.
Challenge 1: Users' interests are extensive and hierarchical, which can be expressed as a power-law distribution of the items clicked by users.
The existing session-based recommendation methods learn representations in Euclidean space, which cannot effectively capture the information of such hierarchical, or in other words, tree-structured data.
Challenge 2:
Recent studies have proved that hierarchical data can be better explained under the non-Euclidean geometry of low-dimensional manifolds. But in GNN-based methods, introducing a non-Euclidean transformation results in a discrepancy between Euclidean and non-Euclidean space when aggregating neighboring messages and applying the attention mechanism.
These challenges remain prevalent in real-world recommendation scenarios, since it has been demonstrated that user behaviors such as clicking or purchasing have the prototypical characteristics of complex structures, which are generally power-law distributed [32, 19, 29]. As mentioned above, data with a hierarchical structure can be well represented in hyperbolic space. This has motivated representation learning in hyperbolic space to effectively capture the information of user behaviors with hierarchical properties [35]. Furthermore, hyperbolic representations can naturally capture similarity and hierarchy through their distances. To illustrate the difference between Euclidean and hyperbolic space, we visualize both in Figure 2. In a two-dimensional Euclidean space, the number of nodes within a given radius of the center grows polynomially with the radius. By contrast, in a two-dimensional hyperbolic space, the number of nodes within a given radius of the center grows exponentially, so hyperbolic space has a more powerful (exponential-level) representation ability than Euclidean space [1]. In conclusion, given the same radius, hyperbolic space is larger and thereby includes more nodes. The general representational capacity of two-dimensional Euclidean space is only quadratic in the radius, which can cause high distortion when we model the hierarchically related data of users' preferences.
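The capacity gap can be checked numerically. With curvature -1, the area of a hyperbolic disk of radius r is 2π(cosh r - 1), which grows exponentially in r, while a Euclidean disk grows only quadratically. A minimal sketch using these standard formulas (not part of the original paper):

```python
import math

def euclidean_area(r):
    """Area of a Euclidean disk: grows quadratically with the radius."""
    return math.pi * r ** 2

def hyperbolic_area(r):
    """Area of a hyperbolic disk (curvature -1): grows exponentially."""
    return 2 * math.pi * (math.cosh(r) - 1)

# at radius 10 the hyperbolic disk already offers orders of magnitude more room
ratio = hyperbolic_area(10.0) / euclidean_area(10.0)
```

At radius 10 the ratio exceeds 200, illustrating why a fixed-dimensional hyperbolic embedding can host exponentially many well-separated points.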
To overcome the challenges above, in this paper we propose a novel graph neural network framework, namely the Hyperbolic Contrastive Graph Recommender (HCGR), built upon hyperbolic space, specifically the Lorentz hyperbolic space for its simplicity and stability, to optimize the underlying hierarchical embeddings. First, we embed items with dense and effective representations that are predisposed to preserving their internal hierarchical properties in Lorentz hyperbolic space. To ensure the correctness of the necessary transformations of these representations, we utilize specific operations based on the Lorentz hyperboloid model. Second, we construct an improved graph neural network framework with a novel message propagation mechanism to model the preferences in user behavior sequences. To enable the model to better distinguish users' preferences for different items, we propose an adaptive hyperbolic attention calculation method. Third, we introduce contrastive learning to optimize the model by considering the distance between positive and negative samples in hyperbolic space.
Overall, we summarize the main contributions of this work as follows:

We exploit hyperbolic item representations for session-based recommendation. To the best of our knowledge, our method is the first to extract the hierarchical information of user behaviors within a session in hyperbolic space for SBR tasks.

We design a novel attention calculation approach in hyperbolic space for graph information aggregation, which cannot be effectively implemented by the existing aggregation methods in Euclidean space.

We introduce contrastive learning to optimize the item representation by considering the geodesic distance between positive and negative samples in Lorentz hyperbolic space.

We conduct extensive experiments on three public datasets and one financial-service industrial dataset, which show that our Lorentz hyperbolic session-based recommendation framework achieves better performance than the state-of-the-art SBR methods in terms of HR@K, NDCG@K and MRR@K.
The structure of this paper is as follows. Section 2 presents the related work, including session-based recommendation and hyperbolic learning. Section 3 introduces the preliminaries of this paper, including the definition of the problem and basic knowledge of the Lorentz hyperbolic space. Section 4 introduces the implementation details of the HCGR framework, and section 5 presents the experimental results. Finally, conclusions are drawn in section 6.
2 Related work
In section 2.1 we review a line of representative works on session-based recommendation, including traditional MC (Markov chain) based models, RNN (recurrent neural network) based models, attention-based models, and GNN (graph neural network) based models. Then, in section 2.2, we discuss hyperbolic representation learning methods related to our proposed HCGR.
2.1 Session-based Recommendation
2.1.1 Markov chain models
Early sequential recommendation methods mainly rely on Markov chains. For example, FPMC [33] combines MF (matrix factorization) and MC to learn the general preference and local interest of users for next-basket recommendation. HRM [40] applies nonlinear operations to extract more complex patterns of both users' sequential behaviors and interests. FOSSIL [13] fuses similarity-based methods with Markov chains to conduct personalized sequential recommendation. A shortcoming of MC-based models is that it is difficult for them to learn long-term dependencies, because they assume that the next state is related only to the immediately preceding state. Although some high-order Markov models can associate the next state with several previous states, they incur high computational cost [23, 42].
2.1.2 Recurrent neural network models
In recent years, researchers have adopted RNNs to capture the time dependency in temporal data. The first RNN-based sequential recommendation method is GRU4REC [15], which uses GRUs (gated recurrent units) to capture long-term dependencies among sessions. Leveraging a novel pairwise ranking loss, GRU4Rec [15] significantly outperforms MC-based approaches. Inspired by GRU4REC [15], MVRNN [8] incorporates visual and textual information to alleviate the item cold-start problem. Furthermore, ROM [49] utilizes an interactive self-attention mechanism to adaptively reorganize the entity memory and the topic memory for the rating prediction task. However, RNN-based methods assume that adjacent items in a session have a fixed sequential dependence, which may generate wrong dependencies and introduce noise in real-world scenarios such as music playing.
2.1.3 Attention mechanism
Recent models with attention mechanisms [37] perform particularly well in sequential recommendation. Li et al. [21] explore a hybrid encoder with an attention mechanism to model users' sequential behaviors and interests in the same session. The short-term attention priority model STAMP [23] captures both the user's general interest from the long-term memory of the session context and the user's current interest from short-term behaviors. SASRec [17] effectively captures users' long-term preferences on both sparse and dense datasets, and FDSA [47] puts the features of behaviors and items into two distinct, independent self-attention blocks to model the transition patterns of the items, achieving remarkable effects.
2.1.4 Graph neural networks
Most advanced sequential recommendation models apply the self-attention mechanism to capture user behavior relations in a long sequence. However, it is a challenge to find both implicit and explicit relations between adjacent behaviors. GNNs can find such relations effectively [16, 42] and can capture complex interactions of user behaviors. For example, SR-GNN [43] instructively constructs a digraph for each sequence, and GC-SAN [45] further incorporates a self-attention mechanism to generate the representation of the constructed digraph. In addition, Wu et al. [44] focus on the users in a session and model their historical sequences with a dot-attention mechanism. FGNN [30] proposes a weighted attention layer and a readout function to learn item and session embeddings for next-item recommendation. Recently, Ma et al. [24] utilize memory models to capture both long-term and short-term user behaviors. Wang et al. [41] introduce a new GNN-based method that can learn global relations between items. Chen et al. [6] utilize several techniques to reduce the information loss within message propagation. GNN-based methods have yielded many fruitful results, but the existing methods generally model user behavior in the tangent space, and the representations learned there are limited to capturing shallow properties and lack hierarchy. In this work, we aim to learn hierarchical item representations in Lorentz hyperbolic space and to find deep user behavior patterns in session-based recommendation.
2.2 Hyperbolic Learning
Recently, many studies have shown that complex data may exhibit a highly non-Euclidean structure [2, 9]. Researchers are increasingly considering building complex neural networks on Riemannian spaces, among which hyperbolic space with constant negative curvature is an attractive option [46, 26]. In many domains, such as sentences in natural language [36], social networks [46], and biological protein graphs [26], data usually have a tree-like structure or can be represented hierarchically, and hyperbolic space is equipped to model hierarchical structures naturally and effectively [27]. Due to its strong representation ability, hyperbolic space has been applied in many areas [22, 4, 5, 7, 10]. For instance, Liu et al. [22] proposed Hyperbolic Graph Convolutional Networks for graph representation learning, combining the expressiveness of hyperbolic space with graph convolutional networks. Chen et al. [5] proposed a hyperbolic interaction model for multi-label classification tasks. These works have shown the advantages and effectiveness of hyperbolic space in learning the hierarchical structures of complex relational data.
Noticing the potential of hyperbolic space for learning the complex interactions between users and items, many researchers have tried to apply hyperbolic learning to recommendation systems [3, 34, 25, 20, 11]. Chamberlain et al. [3] proposed a large-scale recommender system based on hyperbolic space, which can be scaled to millions of users. [34] constructed multiple hyperbolic manifolds to map the representations of users and ads, and proposed a framework that can effectively learn the hierarchical structure in users and ads based on hyperbolic space. Ma et al. [25] proposed a recommendation model in hyperbolic space for Top-K recommendation. Li et al. [20] presented the Hyperbolic Social Recommender, which utilizes hyperbolic geometry to boost performance. Wang et al. [39] proposed a novel graph neural network framework (HyperSoRec) combining hyperbolic learning with social recommendation.
3 Preliminaries
In this section, we introduce basic knowledge about hyperbolic geometry and the graph neural network concepts related to our proposed HCGR.
3.1 Hyperbolic Geometry
Hyperbolic space is a Riemannian manifold with negative curvature. Several hyperbolic geometric models have been widely used, including the Poincaré disk model [11, 36], the Klein model [12] and the Lorentz (hyperboloid) model [28]. All these hyperbolic models are isometrically equivalent, i.e., any point in one of them can be transformed to a point in another with a distance-preserving transformation [31]. In this paper, we choose the Lorentz model as the framework's cornerstone because of the numerical stability and computational simplicity of its exponential/logarithmic maps and its distance function; the Lorentz formulation has been found to be more stable for numerical optimization [28]. We aim to learn $d$-dimensional user and item embeddings.
A $d$-dimensional hyperbolic space is a Riemannian manifold with a constant negative curvature $\kappa < 0$, denoted by $\mathbb{H}^d$. The negative reciprocal of the curvature is denoted by $\beta = -1/\kappa$, where $\beta > 0$. The Lorentz representation is defined by the pair $(\mathbb{H}^d, g)$, where

$\mathbb{H}^d = \{x \in \mathbb{R}^{d+1} : \langle x, x \rangle_{\mathcal{L}} = -\beta,\ x_0 > 0\}$   (3.1)

and $\langle \cdot, \cdot \rangle_{\mathcal{L}}$ is the Lorentz inner product given by

$\langle x, y \rangle_{\mathcal{L}} = -x_0 y_0 + \textstyle\sum_{i=1}^{d} x_i y_i = x^{\top} g\, y$   (3.2)

and the metric matrix $g$ is given by $g = \mathrm{diag}(-1, 1, \dots, 1)$.
The distance function induced by the metric is

$d_{\mathcal{L}}(x, y) = \sqrt{\beta}\, \mathrm{arcosh}\!\left(-\langle x, y \rangle_{\mathcal{L}} / \beta\right)$   (3.3)

For any point $x \in \mathbb{H}^d$, the tangent space $\mathcal{T}_x \mathbb{H}^d$ at point $x$ is a $d$-dimensional Euclidean space. The elements of $\mathcal{T}_x \mathbb{H}^d$ are referred to as tangent vectors and satisfy

$\langle x, v \rangle_{\mathcal{L}} = 0, \quad \forall v \in \mathcal{T}_x \mathbb{H}^d$   (3.4)
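The Lorentz inner product and the induced distance are straightforward to compute. The sketch below (NumPy, with `lift` as an illustrative helper that places a Euclidean point on the hyperboloid, and a clamp added for numerical stability) checks that a lifted point satisfies the manifold constraint of Eq. (3.1) with β = 1:

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentz inner product <x, y>_L = -x0*y0 + sum_i xi*yi (Eq. 3.2)."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lorentz_distance(x, y, beta=1.0):
    """Geodesic distance on the hyperboloid <x, x>_L = -beta (Eq. 3.3)."""
    inner = np.clip(-lorentz_inner(x, y) / beta, 1.0, None)  # clamp: arcosh needs >= 1
    return np.sqrt(beta) * np.arccosh(inner)

def lift(v, beta=1.0):
    """Place a Euclidean point v on the hyperboloid by solving for x0."""
    x0 = np.sqrt(beta + np.dot(v, v))
    return np.concatenate(([x0], v))

x = lift(np.array([0.3, -0.2]))
y = lift(np.array([0.0, 0.0]))            # the origin (sqrt(beta), 0, ..., 0)
assert abs(lorentz_inner(x, x) + 1.0) < 1e-9   # point lies on the manifold
assert lorentz_distance(x, x) < 1e-6           # distance to itself is zero
```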
3.2 Graph Neural Network
GNNs are neural networks that can handle graph-structured data directly. They are often applied to node classification, link prediction and graph classification tasks. In this paper, we focus on graph classification, because we formulate each user's behavior as a graph and want to learn a representation of the whole graph rather than of a single node.
Let $G = (V, E)$ denote a given graph, where $V$ and $E$ are the sets of nodes and edges respectively, and $x_v$ represents the feature vector of node $v \in V$, which serves as the initial embedding of node $v$. To be specific, we formulate the graph classification task as follows: given a collection of graphs $\{G_1, \dots, G_N\}$ and the corresponding labels $\{y_1, \dots, y_N\}$, our goal is to learn a classifier and the graph-level representation $h_G$ to predict the label of each graph. GNNs use the structure of the graph and the original feature of each node to learn its corresponding representation. The learning process takes a node as the center and iteratively aggregates the neighborhood information along edges. The information aggregation and update process can be formulated as follows:

$m_v^{(l)} = \mathrm{Aggregate}^{(l)}\big(\{h_u^{(l-1)} : u \in \mathcal{N}(v)\}\big)$   (3.5)

$h_v^{(l)} = \mathrm{Update}^{(l)}\big(h_v^{(l-1)}, m_v^{(l)}\big)$   (3.6)

where $h_v^{(l)}$ represents the embedding of node $v$ after the $l$-th aggregation layer and $\mathcal{N}(v)$ is the neighborhood of node $v$. The aggregation function $\mathrm{Aggregate}^{(l)}$ gathers the information from the neighborhood and passes it to the target node $v$. The update function $\mathrm{Update}^{(l)}$ calculates the new node state from the previous embedding $h_v^{(l-1)}$ and the aggregated message $m_v^{(l)}$.
After $L$ steps of information aggregation, the final embedding gathers the $L$-hop neighborhood and structure information. For the graph classification task, a readout function generates a graph-level embedding by gathering the embeddings of all nodes in the final layer:

$h_G = \mathrm{Readout}\big(\{h_v^{(L)} : v \in V\}\big)$   (3.7)
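The aggregate/update/readout loop of Eqs. (3.5)-(3.7) can be sketched in a few lines. The mean aggregator, the averaging update, and the mean readout below are illustrative choices, not the specific functions used by HCGR:

```python
import numpy as np

def aggregate(h, neighbors):
    """Eq. (3.5) with a mean aggregator over each node's neighborhood."""
    return {v: np.mean([h[u] for u in nbrs], axis=0) if nbrs else np.zeros_like(h[v])
            for v, nbrs in neighbors.items()}

def update(h, m):
    """Eq. (3.6): combine each node's state with its aggregated message."""
    return {v: 0.5 * (h[v] + m[v]) for v in h}

def readout(h):
    """Eq. (3.7): graph-level embedding as the mean of final node states."""
    return np.mean(list(h.values()), axis=0)

h = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([1.0, 1.0])}
neighbors = {0: [1, 2], 1: [0], 2: [0]}
for _ in range(2):                        # two rounds of message passing
    h = update(h, aggregate(h, neighbors))
g = readout(h)                            # graph-level representation
```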
4 Methodology
In this section, we describe the implementation details of the HCGR framework. First, in section 4.1 we illustrate the notations used in this paper and define the session-based recommendation task. In section 4.2, we transform the user behaviors within a session into a session graph and present HCGR's overall pipeline in Figure 3. In section 4.3, we introduce the embeddings in Lorentz hyperbolic space. Next, we describe the novel attention mechanism especially designed for hyperbolic geometry in section 4.4. After learning the embeddings, we set up the hyperbolic attention mechanism to construct the representation of user behaviors (section 4.5). Finally, we describe contrastive learning with the hyperbolic space distance (section 4.6).
4.1 Notation and Problem Definition
A session-based recommendation task is constructed on historical user behavior sessions, and makes predictions based on the current user session. In this task, there is an item set $V = \{v_1, v_2, \dots, v_{|V|}\}$, where $|V|$ is the number of items and all items are unique. Each session $S = [s_1, s_2, \dots, s_n]$ is composed of a series of the user's interactions, where $s_t$ represents the item clicked at the $t$-th position in $S$ and $n$ represents the session's length. The purpose of session-based recommendation is to predict the item $s_{n+1}$ that the user is most likely to click on next in a given session $S$.
For each given session in the training process, there is a label $s_{n+1}$ as the target. For each item in the given session, our model learns a corresponding embedding vector $h \in \mathbb{R}^d$, where $d$ is the dimension of the vector. Our model outputs a probability distribution over the item set for the given session $S$, where the items with the Top-K values are regarded as candidates for Top-K recommendation.

Notation   Description

$\mathbb{H}^d$   a hyperbolic space of dimension $d$
$\mathcal{M}$   a Riemannian manifold
$S$   a given session
$\kappa$   the curvature of hyperbolic space
$\beta$   the negative reciprocal of the curvature
$\mathcal{T}_x\mathcal{M}$   the tangent space at point $x$ with dimension $d$
$h$   an item embedding in Euclidean space
$l$   the layer index of the graph neural network
$h^{\mathbb{H}}$   an item embedding in hyperbolic space
4.2 Behaviors Graph
Because graph neural networks cannot deal with sessions directly, the first step is to convert a given session $S$ into a session graph $G_S$. According to our analysis of the datasets, it is very common for users to click the same item multiple times within a session. Because user behavior is chronological and the same item may be clicked multiple times, we choose a weighted directed graph to represent the evolution of the given session $S$; all sessions are converted into session graphs. We show this conversion process in Figure 4. We use $E_S$ to denote the set of weighted directed edges. Its elements are triples $(s_t, s_{t+1}, w_t)$, where $s_t$ and $s_{t+1}$ are the items clicked at timestamps $t$ and $t+1$ respectively, and $w_t$ denotes the weight of the directed edge between them. Note that if a node does not have a self-loop, we add a self-loop to it. Each node represents a unique item in the session, and the node features are initialized in the Lorentz hyperbolic space as introduced in section 4.3.
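The session-to-graph conversion described above can be sketched as follows. The function name and the self-loop weight of 1 are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

def session_to_graph(session, self_loop_weight=1):
    """Convert an item sequence into a weighted directed session graph.
    A transition repeated in the session increases the edge weight; every
    node receives a self-loop unless the session already produced one."""
    edges = Counter(zip(session, session[1:]))   # (s_t, s_{t+1}) -> weight
    nodes = set(session)
    for v in nodes:
        if (v, v) not in edges:
            edges[(v, v)] = self_loop_weight
    return nodes, dict(edges)

nodes, edges = session_to_graph(["a", "b", "a", "b", "c"])
# the transition a -> b occurs twice, so its edge weight is 2
```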
4.3 Embeddings in Lorentz Hyperbolic Space
We use representations from the Lorentz hyperbolic space for item embedding. Here $\beta$ is the negative reciprocal of the curvature $\kappa$, which is treated as a trainable parameter and initialized empirically. Then we fix the origin $o = (\sqrt{\beta}, 0, \dots, 0) \in \mathbb{H}^d$ and use it as a reference point. The embeddings are initialized by sampling a Gaussian distribution on the tangent space (a Euclidean space) of the reference point $o$. We denote the mappings between hyperbolic space and tangent spaces as the exponential map and the logarithmic map, respectively. The exponential map $\exp_x$ maps a subset of the tangent space $\mathcal{T}_x\mathbb{H}^d$ to $\mathbb{H}^d$. The logarithmic map $\log_x$ is the reverse map from $\mathbb{H}^d$ back to the tangent space $\mathcal{T}_x\mathbb{H}^d$. For any $x, y \in \mathbb{H}^d$ and $v \in \mathcal{T}_x\mathbb{H}^d$ satisfying $v \neq 0$ and $y \neq x$, the exponential map and logarithmic map are defined as follows:
$\exp_x(v) = \cosh\!\Big(\frac{\|v\|_{\mathcal{L}}}{\sqrt{\beta}}\Big)\, x + \sqrt{\beta}\, \sinh\!\Big(\frac{\|v\|_{\mathcal{L}}}{\sqrt{\beta}}\Big)\, \frac{v}{\|v\|_{\mathcal{L}}}$   (4.8)

$\log_x(y) = d_{\mathcal{L}}(x, y)\, \frac{y + \frac{1}{\beta}\langle x, y \rangle_{\mathcal{L}}\, x}{\big\| y + \frac{1}{\beta}\langle x, y \rangle_{\mathcal{L}}\, x \big\|_{\mathcal{L}}}$   (4.9)

where $\|v\|_{\mathcal{L}} = \sqrt{\langle v, v \rangle_{\mathcal{L}}}$ is the Lorentzian norm.
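A direct NumPy sketch of the two maps for β = 1, following the standard Lorentz-model formulas given in Eqs. (4.8)-(4.9) (the small-norm clamps are illustrative numerical safeguards); the round trip log∘exp recovers the original tangent vector:

```python
import numpy as np

def lorentz_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v, beta=1.0):
    """Map tangent vector v at x onto the hyperboloid (Eq. 4.8)."""
    n = np.sqrt(max(lorentz_inner(v, v), 1e-12))       # Lorentzian norm ||v||_L
    return np.cosh(n / np.sqrt(beta)) * x + np.sqrt(beta) * np.sinh(n / np.sqrt(beta)) * v / n

def log_map(x, y, beta=1.0):
    """Map y back to the tangent space at x (Eq. 4.9)."""
    alpha = np.clip(-lorentz_inner(x, y) / beta, 1.0, None)
    u = y - alpha * x                                  # equals y + (1/beta)<x,y>_L x
    n = np.sqrt(max(lorentz_inner(u, u), 1e-12))
    return np.sqrt(beta) * np.arccosh(alpha) * u / n   # d_L(x, y) * u / ||u||_L

o = np.array([1.0, 0.0, 0.0])            # origin of H^2 with beta = 1
v = np.array([0.0, 0.4, -0.3])           # tangent at o, since <o, v>_L = 0
y = exp_map(o, v)
v_back = log_map(o, y)
assert np.allclose(v, v_back, atol=1e-6)  # log is the inverse of exp
```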
4.4 Hyperbolic Graph Attention Network
Following the mapping layer, the key issue is how to model the session graph and mine user preferences. Users typically click on several items they like, and these items have a rich hierarchical structure. As a result, we propose a novel information aggregation scheme with an attention mechanism in hyperbolic space to capture the influence of different items on user preferences during message propagation. Hyperbolic space can represent items better [3, 34], but we still face a technical challenge: the traditional hyperbolic model does not define the necessary vector operations, such as vector addition and multiplication. Inspired by previous works [3, 34, 39], we formulate the multiplication and addition operations in hyperbolic space as follows:
$W \otimes x = \exp_o\big(W \log_o(x)\big)$   (4.10)

$x \oplus b = \exp_x\big(P_{o \to x}(\log_o(b))\big)$   (4.11)
where $P_{x \to y}$ is the parallel transport: for two points $x, y$ on the Lorentz manifold $\mathbb{H}^d$, the parallel transport of a tangent vector $v \in \mathcal{T}_x\mathbb{H}^d$ to the tangent space $\mathcal{T}_y\mathbb{H}^d$ is:

$P_{x \to y}(v) = v + \frac{\langle y, v \rangle_{\mathcal{L}}}{\beta - \langle x, y \rangle_{\mathcal{L}}}\,(x + y)$   (4.12)
Nonlinear activation with different curvatures is defined as follows:

$\sigma^{\otimes}(x) = \exp_o^{\beta_{l+1}}\big(\sigma(\log_o^{\beta_l}(x))\big)$   (4.13)

where $\beta_l$ and $\beta_{l+1}$ are the hyperbolic curvature parameters at layers $l$ and $l+1$, respectively. To be specific, we project the embedding from the tangent space to $\mathbb{H}^d$ via the exponential map according to Eq. (4.8), and then perform the addition and multiplication operations according to the equations above.
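The tangent-space operations of Eqs. (4.10)-(4.12) can be sketched as below. The formulas follow standard Lorentz-model constructions from prior hyperbolic GNN work, so this is an illustrative reading rather than the authors' exact implementation; `hyp_matvec` and `hyp_add` are hypothetical helper names:

```python
import numpy as np

def lorentz_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v, beta=1.0):
    n = np.sqrt(max(lorentz_inner(v, v), 1e-12))
    return np.cosh(n / np.sqrt(beta)) * x + np.sqrt(beta) * np.sinh(n / np.sqrt(beta)) * v / n

def log_map(x, y, beta=1.0):
    alpha = np.clip(-lorentz_inner(x, y) / beta, 1.0, None)
    u = y - alpha * x
    n = np.sqrt(max(lorentz_inner(u, u), 1e-12))
    return np.sqrt(beta) * np.arccosh(alpha) * u / n

def parallel_transport(x, y, v, beta=1.0):
    """Transport tangent vector v from T_x to T_y (Eq. 4.12)."""
    return v + lorentz_inner(y, v) / (beta - lorentz_inner(x, y)) * (x + y)

o = np.array([1.0, 0.0, 0.0])               # origin of the hyperboloid, beta = 1

def hyp_matvec(W, x):
    """W (*) x: multiply in the tangent space at the origin (Eq. 4.10)."""
    v = log_map(o, x)
    v = np.concatenate(([0.0], W @ v[1:]))  # W acts on the spatial coordinates
    return exp_map(o, v)

def hyp_add(x, b):
    """x (+) b: transport the bias to T_x, then map back (Eq. 4.11)."""
    v = parallel_transport(o, x, log_map(o, b))
    return exp_map(x, v)

x = exp_map(o, np.array([0.0, 0.3, -0.1]))
z = hyp_add(x, exp_map(o, np.array([0.0, 0.1, 0.2])))
assert abs(lorentz_inner(z, z) + 1.0) < 1e-6   # result stays on the manifold
```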
The crucial idea of traditional GNNs is to learn representations on the given graph by iteratively aggregating and capturing multi-hop neighborhood structures and features. The information aggregation process usually consists of two parts: feature transformation and nonlinear activation. Recent studies have shown that, compared with simple average aggregation, the gain from feature transformation and nonlinear activation is marginal and may even be negative. In addition, these two operations may lead to significant overfitting on highly sparse user behavior data [14]. Based on these studies, we remove the unnecessary feature transformation and nonlinear activation to accelerate training and inference and to reduce the complexity of our framework.
To make better use of the representational ability of the Lorentz space, we redesign the information aggregation in hyperbolic space. Following the ideas of GCN [18] and GAT [38], we calculate the attention weights between each target node and its neighbors. The detailed calculation is as follows:
(4.14) 
(4.15) 
where the projection matrices are trainable parameters and the scaling factor is a constant. The learning process takes a node as the center and iteratively aggregates neighbor information along edges. For each node, under the hyperbolic mechanism, all attention coefficients of its neighbors can be calculated as in Eq. (4.14). Using these attention coefficients, a linear combination of the neighbors is used to update the embeddings of the nodes.
To take full advantage of higher-order relationships, we stack multiple hyperbolic attention layers together:
(4.16) 
4.5 Hyperbolic Attention Mechanism
After we obtain the graph-level representation, we utilize a self-attention mechanism to better capture the user's preference. Self-attention is an important variant of the attention mechanism and has yielded many fruitful results [37, 17, 47]. It can calculate the global dependence between user behaviors and capture the item transition relations of the whole session sequence. The original self-attention mechanism is not defined in hyperbolic space, so we extend it to hyperbolic space and formalize the hyperbolic self-attention mechanism as follows:
(4.17) 
where the query ($Q$), key ($K$) and value ($V$) projections are trainable matrices. The mechanism receives $Q$, $K$ and $V$ and calculates the similarity between the elements in the session through the scaled dot product, so as to characterize the user's long-term preference, where $d$ is the dimension of the input vectors and $1/\sqrt{d}$ is the scale factor used to prevent the gradient vanishing problem caused by large values after the dot product. Correspondingly, the element-wise feed-forward layer is also extended to hyperbolic space and is given by:
(4.18) 
where $W_1$ and $W_2$ are weight matrices, and $b_1$ and $b_2$ are
$d$-dimensional bias vectors. This layer takes full account of the interactions between the dimensions of the vectors through a nonlinear activation function and linear transformations. A skip connection after the feed-forward network reduces the loss of information and takes advantage of low-layer information. For simplicity, we denote the entire hyperbolic self-attention mechanism above as:
(4.19) 
Recent studies have shown that different layers of a self-attention mechanism may capture different types of features, so it is necessary to stack an appropriate number of layers to enhance the model's expressiveness. The multi-layer hyperbolic self-attention mechanism is defined as:
(4.20) 
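One common way to realize hyperbolic self-attention, consistent with the description around Eq. (4.17), is to apply ordinary scaled dot-product attention after the embeddings have been mapped to the tangent space via the logarithmic map. The sketch below shows only that Euclidean core (the log/exp projections are omitted), with hypothetical weight names Wq, Wk, Wv:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # stabilized softmax
    return e / e.sum(axis=-1, keepdims=True)

def tangent_self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a session.
    H: (n, d) item embeddings assumed to be already in the tangent space;
    Wq/Wk/Wv: (d, d) projection matrices."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))   # 1/sqrt(d) guards against vanishing gradients
    return A @ V

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
out = tangent_self_attention(H, np.eye(8), np.eye(8), np.eye(8))
```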
After $L$ adaptive hyperbolic self-attention blocks, we obtain the long-term attentive session representation. The short-term interest describes the current preferences of the user, based on the several most recently visited items, since a user's next behavior is often closely related to his or her recent interests. To better model the relationship across the whole session, we set up a gated mechanism to capture both long-term and short-term preferences:
(4.21) 
where the short-term representation denotes the embedding corresponding to the last item in the given session $S$.
Finally, after obtaining a unified preference representation, we compute a recommendation score for each element in the item set:
(4.22) 
where the output is the recommendation probability of our framework for each item. For the session-based recommendation task, we select the items with the highest probabilities from the item set as the final result.
4.6 Contrastive Learning
By projecting the item embeddings into hyperbolic space, we improve the performance of our framework. In recommendation scenarios there are many similar items, but users usually choose only their favorite ones, so if the model can perceive such subtle distinctions, its ranking performance may improve significantly. Inspired by the successful practice of contrastive learning, we introduce contrastive learning into the framework in an innovative way to strengthen the modeling of user behavior. Compared with plain contrastive learning, our framework computes in hyperbolic space, which is somewhat more complicated. Specifically, we want to make the best use of the distances between items in hyperbolic space through contrastive learning, so that the recommendation model perceives more subtle distinctions and improves its ranking performance.
We formulate our objective in two parts. The first part is the cross-entropy loss function, which has been widely used in recommender systems. The second part is a contrastive ranking loss with a margin. Its purpose is to separate the positive and negative pairs up to a given margin. When the margin is reached, the pairs of items are considered properly segregated and incur little loss. This lets the model keep its focus on the pairs of items that are not yet beyond the margin, and the margin separation is optimized in Lorentz hyperbolic space.
(4.23)
(4.24) 
(4.25) 
where two weighting coefficients control the magnitudes of the cross-entropy loss and the contrastive ranking loss, respectively.
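A minimal sketch of a margin-based ranking loss that uses the Lorentz geodesic distance of Eq. (3.3); the margin value, the `lift` helper, and the pair construction are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def lorentz_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lorentz_distance(x, y, beta=1.0):
    return np.sqrt(beta) * np.arccosh(np.clip(-lorentz_inner(x, y) / beta, 1.0, None))

def contrastive_margin_loss(anchor, pos, neg, margin=0.5):
    """Hinge-style ranking loss: pull the positive item closer to the anchor
    than the negative by at least `margin`, then stop penalizing."""
    return max(0.0, margin + lorentz_distance(anchor, pos) - lorentz_distance(anchor, neg))

def lift(v):
    """Place a Euclidean point on the hyperboloid (beta = 1)."""
    return np.concatenate(([np.sqrt(1.0 + np.dot(v, v))], v))

a = lift(np.array([0.1, 0.1]))     # anchor
p = lift(np.array([0.15, 0.1]))    # nearby positive item
n = lift(np.array([2.0, -2.0]))    # distant negative item
assert contrastive_margin_loss(a, p, n) == 0.0   # already beyond the margin
```

Once the negative is separated by more than the margin, the pair contributes zero loss, so optimization concentrates on the hard pairs still inside the margin.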
5 Experiment
In this section, detailed experiments are conducted to assess the performance of the HCGR framework. We intend to answer the following questions:

RQ1: How does our proposed method perform compared with the state-of-the-art methods?

RQ2: How do the different components (i.e., Lorentz transformation, multi-hop graph aggregation and contrastive learning) affect the performance of HCGR?

RQ3: Can HCGR provide reasonable explanations with regard to predicting user preference and achieve better recommendation results?
In particular, we first describe the datasets and experimental configuration (section 5.1). Then we compare the effectiveness of HCGR with several comparison methods (section 5.2). In section 5.3, we analyze in detail the generalization capability and transferability of HCGR. Lastly, we present a case study in section 5.4 and visualize the embeddings in hyperbolic space in section 5.5.
5.1 Experimental Setup
5.1.1 Datasets and Metrics
We evaluate different recommenders based on four datasets, three of which are public benchmarks. Specifically, the first¹ is the competition dataset of the RecSys Challenge 2015, which contains click streams from an e-commerce website over six months and related information. The second² contains a set of users from an online music service and describes the tagging and music-listening details of each user. The third³ is a grocery dataset published by ACM RecSys, covering goods ranging from food and office supplies to furniture. The fourth dataset comes from a financial service scenario on an industrial online recommendation platform at Ant Group. It describes users' interests and preferences in financial products such as debit, trust and accounting, contains more than 5.6 million interactions from 691,701 users, and brings more challenges compared with the three public datasets.
¹https://recsys.acm.org/recsys15/challenge/
²http://millionsongdataset.com/lastfm/
³http://recsyswiki.com/wiki/Groceryshoppingdatasets
The data statistics after preprocessing are summarized in Table 2, where Avg.I/user and Avg.I/item denote "average interactions per user" and "average interactions per item", respectively. To filter noisy data, we remove items that appear fewer than 3 times and then remove all users with fewer than 3 interacted items on the four datasets. After preprocessing, we split user behaviors into three parts: we randomly pick 80% as the training set, 10% as the validation set for hyper-parameter tuning, and the remaining part for evaluating the performance of the model. Furthermore, to prevent overfitting, we set the patience argument to 10 in the early stopping mechanism, which denotes how many epochs we wait after the last improvement of the validation metrics before breaking the training loop.
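The patience-based early stopping described above can be sketched in a few lines; the class name is a hypothetical helper, and the metric is assumed to be higher-is-better:

```python
class EarlyStopping:
    """Stop training once the validation metric has not improved for
    `patience` consecutive epochs (patience = 10 in our experiments)."""

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, metric):
        """Record one epoch's validation metric; return True to stop."""
        if metric > self.best:
            self.best, self.bad_epochs = metric, 0   # improvement: reset counter
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=2)
# a run whose validation metric keeps degrading stops after 2 bad epochs
history = [stopper.step(m) for m in [1.0, 0.9, 0.8]]   # [False, False, True]
```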
Dataset  Users  Items  Avg.I/user  Avg.I/item  Behaviors 

To fairly compare the generalization performance of each model, we evaluate each user's performance on the test set by adopting three recognized metrics: HR@K, NDCG@K and MRR@K, reported at several values of K.

HitRate: if one or more elements of the label set appear in the prediction results, we call it a hit. HR@K is calculated as follows:

$\mathrm{HR@K} = \frac{1}{N} \sum_{u=1}^{N} \mathbb{1}\big(\hat{R}_{u,1:K} \cap T_u \neq \emptyset\big)$   (5.26)

where $N$ is the number of test sessions, $\mathbb{1}(\cdot)$ denotes the indicator function, $\hat{R}_{u,1:K}$ is the top-K prediction list, $T_u$ is the label set, and $\emptyset$ is the empty set. A larger value of HR@K reflects more accurate recommendation results.

Normalized Discounted Cumulative Gain (NDCG) is a ranking-based metric, which focuses on the order of the retrieval results and is calculated in the following way:

$\mathrm{NDCG@K} = \frac{1}{Z} \sum_{k=1}^{K} \frac{\mathbb{1}(\hat{r}_k \in T)}{\log_2(k+1)}$   (5.27)

where $Z$ is a constant denoting the maximum possible value of the sum and $\mathbb{1}(\cdot)$ denotes an indicator function. A larger value reflects a higher ranking position of the expected item.

Mean Reciprocal Rank (MRR): when the expected item is not within the top K positions, the reciprocal rank is set to 0. It is formally given by:

$\mathrm{MRR@K} = \frac{1}{N} \sum_{u=1}^{N} \frac{1}{\mathrm{rank}_u}$   (5.28)

where $\mathrm{rank}_u$ denotes the position of the expected item in the recommendation list, with $1/\mathrm{rank}_u = 0$ if $\mathrm{rank}_u > K$. MRR is a normalized ranking metric that takes the order of the recommendation list into account; a larger value reflects a higher ranking position of the expected item.
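For the common single-target case of session-based evaluation, the three metrics reduce to short functions. This sketch assumes one ground-truth item per session, which is a simplification of the general set-based definitions above:

```python
import numpy as np

def hit_rate(ranked, target, k):
    """HR@K for one session: 1 if the target item is in the top-K list."""
    return float(target in ranked[:k])

def ndcg(ranked, target, k):
    """NDCG@K with a single relevant item: 1/log2(rank+1) if hit, else 0."""
    if target in ranked[:k]:
        return 1.0 / np.log2(ranked.index(target) + 2)
    return 0.0

def mrr(ranked, target, k):
    """MRR@K: reciprocal rank of the target, 0 if it is below position K."""
    if target in ranked[:k]:
        return 1.0 / (ranked.index(target) + 1)
    return 0.0

ranked = ["b", "a", "c", "d"]           # model's ranked prediction list
assert hit_rate(ranked, "a", 2) == 1.0  # target "a" sits at rank 2
assert mrr(ranked, "a", 2) == 0.5
```

Averaging each function over all test sessions yields the dataset-level metric.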
5.1.2 Comparison Methods
To demonstrate the performance of HCGR, we consider the following representative methods for performance comparisons:

FPMC [33]: a classical Markov-based model that makes predictions from the user's latest interaction.

FOSSIL [13]: a classical Markov-based model that captures personalized sequential dynamics.

GRU4Rec [15]: a representative RNN-based method for session-based recommendation that stacks multiple GRU layers and uses session-parallel mini-batch training.

NARM [21]: a hybrid encoder with an attention mechanism that models sequential behaviors within the current session.

SASRec [17]: a self-attention-based sequential recommender that utilizes relatively few actions while still capturing long-range dependencies.

STAMP [23]: an attention-based method that prioritizes short-term behavior.

SR-GNN [43]: a graph-based recommender that models each session as a graph to learn item representations.

GC-SAN [45]: an improved version of SR-GNN that combines a GNN with a multi-layer self-attention mechanism to compute sequence-level embeddings.

FGNN [30]: a graph-based method that uses a weighted attention network to compute graph-level embeddings.

LESSR [6]: a session-based GNN recommender that innovatively utilizes auxiliary graphs to generate item representations.

HCGR: our approach with a novel attentive information aggregation scheme, which utilizes a contrastive loss to optimize the model by considering the distance between positive and negative samples in hyperbolic space.
In this experiment, we set the maximum length of a session to , and the embedding dimension to 128 for all datasets; the initial learning rate is uniformly set to 0.001, with a linear schedule decay rate of 0.5 every 3 epochs, and the $L_2$ penalty is . All parameters are initialized from a Gaussian distribution with mean 0 and standard deviation 0.1, and the model is trained with the Adam optimizer.
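The step decay above amounts to halving the learning rate every 3 epochs. A minimal sketch of the schedule (in PyTorch this corresponds to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.5)`):

```python
def learning_rate(epoch, base_lr=1e-3, gamma=0.5, step=3):
    """Step decay as described in the text: multiply the base
    learning rate by 0.5 once every 3 epochs."""
    return base_lr * gamma ** (epoch // step)
```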
5.1.3 Data exploration
As discussed in Section 2.2, complex data with a tree-like structure, i.e., data that obeys a power-law distribution, is effectively represented in hyperbolic space. Therefore, we check the distribution of the data used in our experiments to verify the appropriateness of our dataset selection. We present the distribution of the number of interactions between users and items in Figure 5. A power-law distribution is observed in the three public datasets as well as the industrial dataset: in the user-item interactions, the majority of users interact with items very few times, and most items receive few clicks. These results demonstrate the tree-like structure of our datasets, which are therefore expected to have better representations in hyperbolic space for session-based recommendation.
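One simple way to check for a power-law tendency, in the spirit of Figure 5, is to fit a line to the log-log rank-frequency curve of item interaction counts; a roughly constant negative slope is the usual signature. This is an illustrative diagnostic of our own, not the paper's analysis:

```python
import math

def loglog_slope(counts):
    """Least-squares slope of log(frequency) versus log(rank).
    For Zipf-like (power-law) data the points are near a straight
    line and the slope is a stable negative constant."""
    freqs = sorted(counts, reverse=True)
    xs = [math.log(r + 1) for r in range(len(freqs))]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var
```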
5.2 Overall Performance (RQ1)
Dataset  Metric  FPMC  FOSSIL  GRU4Rec  NARM  SASRec  STAMP  SRGNN  GCSAN  FGNN  LESSR  HCGR  Improv. 
H@10  0.0623  0.0639  0.0759  0.0749  0.0808  0.0735  0.0902  0.0939  0.0946  0.106  0.1071  1.04%  
M@10  0.03  0.0178  0.0288  0.0276  0.0342  0.0343  0.0411  0.0419  0.0414  0.0457  0.0523  14.44%  
N@10  0.0376  0.0286  0.0398  0.0386  0.0452  0.0411  0.0484  0.0542  0.0539  0.062  0.0651  5.00%  
H@20  0.0923  0.0951  0.1131  0.1153  0.1192  0.1177  0.1099  0.1208  0.1241  0.1275  0.1388  8.86%  
M@20  0.0321  0.02  0.0312  0.0303  0.037  0.0362  0.0431  0.0437  0.0434  0.0505  0.0541  7.13%  
N@20  0.0452  0.0365  0.0491  0.0488  0.0549  0.0378  0.0354  0.0611  0.0614  0.0699  0.0749  7.15%  
H@10  0.4093  0.4014  0.4524  0.4615  0.4317  0.3967  0.4341  0.4768  0.4642  0.4735  0.4798  0.61%  
M@10  0.1603  0.1471  0.2163  0.2207  0.1716  0.1915  0.2204  0.188  0.197  0.2241  0.2253  0.54%  
N@10  0.219  0.2072  0.2719  0.2773  0.2328  0.2401  0.2709  0.2558  0.2598  0.2828  0.2898  2.48%  
H@20  0.5013  0.4902  0.5544  0.5636  0.5391  0.4797  0.5279  0.5895  0.5687  0.5722  0.5938  0.73%  
M@20  0.1668  0.1533  0.2235  0.2278  0.1791  0.1973  0.227  0.1959  0.2044  0.231  0.2325  0.65%  
N@20  0.2424  0.2297  0.2978  0.3032  0.26  0.2611  0.2946  0.2844  0.2864  0.3078  0.3162  2.73%  
  H@10  0.0853  0.0995  0.1091  0.1028  0.1091  0.0861  0.094  0.1099  0.1056  0.1115  0.1134  1.70% 
M@10  0.04  0.0344  0.0456  0.0438  0.0447  0.0404  0.0435  0.0444  0.0396  0.0378  0.0487  28.84%  
N@10  0.0506  0.0497  0.0604  0.0576  0.0598  0.0511  0.0554  0.0587  0.0552  0.0533  0.0539  1.13%  
H@20  0.1149  0.1358  0.1509  0.1401  0.1494  0.1181  0.1262  0.1403  0.1424  0.1477  0.1507  2.03%  
M@20  0.042  0.0369  0.0485  0.0464  0.0475  0.0426  0.0458  0.0472  0.0422  0.0489  0.0512  4.70%  
N@20  0.058  0.0589  0.0709  0.067  0.07  0.0592  0.0635  0.0699  0.0644  0.0673  0.0733  8.92%  
H@10  0.5136  0.4521  0.5647  0.5459  0.5232  0.5542  0.5522  0.5505  0.5612  0.5562  0.5773  2.21%  
M@10  0.2899  0.2623  0.3255  0.3185  0.3016  0.3167  0.3173  0.3164  0.3299  0.3104  0.3373  2.24%  
N@10  0.3429  0.3073  0.3822  0.3724  0.354  0.373  0.373  0.3754  0.3835  0.3811  0.3901  1.72%  
H@20  0.6185  0.5438  0.6647  0.6453  0.6261  0.6603  0.6544  0.6581  0.6684  0.6616  0.6713  0.43%  
M@20  0.2972  0.2686  0.3324  0.3254  0.3087  0.3241  0.3245  0.3417  0.3445  0.3424  0.3578  3.86%  
N@20  0.3694  0.3304  0.4075  0.3976  0.3799  0.3998  0.3989  0.4025  0.4103  0.4026  0.4135  0.78%  
* Relative improvements are calculated by comparing with the second-best performance. 
The experimental results of all comparison methods on session-based recommendation are presented in Table 3, with the best result in each column highlighted in boldface. As can be observed, HCGR outperforms the best baselines with more than 4.5% performance improvement on average across all datasets. From the results in Table 3, we can draw the following main findings:

The RNN-based approaches that capture sequential dependency within a session (i.e., GRU4Rec, NARM) remarkably outperform the traditional models that rely on Markov chains (i.e., FPMC, FOSSIL). This proves that capturing sequential effects is a key factor for session-based recommendation, as users' session-based behaviors usually occur within a short period and are likely to be temporally dependent.

The attention-based models (i.e., NARM, SASRec, and STAMP) achieve higher performance than those without an attention mechanism (i.e., GRU4Rec) on all evaluation metrics. This is because NARM, STAMP, and SASRec can extract the shift of user interest within sessions and identify the main purpose of the current session by incorporating an attention mechanism, which captures personal interest from long-term memory or models the user's current interest from short-term behaviors. This indicates that RNN-based approaches, which assume that adjacent items in a session have a fixed sequential dependence, may generate wrong dependencies and thus recommendation bias, which can be alleviated by involving the attention mechanism.

The GNN-based models (i.e., GC-SAN, FGNN) achieve better performance than RNN-based models with or without attention mechanisms, owing to the remarkable capacity of graph neural networks to capture the complex interactions of user behaviors and to describe the coherence of items in a session. RNN-based models ignore these properties, and this ignorance leads to overfitting.

Our proposed HCGR consistently outperforms all comparison models on all datasets. Compared with FGNN and LESSR, our model involves an advanced hyperbolic learning component that more effectively captures coherent and hierarchical representations of user behaviors within the Lorentz hyperbolic space, which ensures the correctness of the necessary representation transformations. Furthermore, we use a novel graph message propagation mechanism with adaptive hyperbolic attention calculation to model users' preferences in session behavior sequences. In addition, we introduce contrastive learning to optimize the model by considering the distance between positive and negative samples in hyperbolic space, which helps learn better item representations.
5.3 Ablation Study (RQ2)
5.3.1 Effect of Lorentz Transformation
To demonstrate the effectiveness of the proposed hyperbolic learning framework for session-based recommendation, we conduct ablation experiments that combine the Lorentz transformation with several baseline Euclidean SBR models, including FPMC, GRU4Rec, SASRec, and SR-GNN. Besides, we compare the performance of HCGR with ECGR (Euclidean Contrastive Graph Representation), obtained by removing the Lorentz transformation from the hyperbolic contrastive graph representation learning framework shown in Figure 3. The experimental results are shown in Table 4, where the postfix indicates that the corresponding model is combined with hyperbolic learning to extract the hierarchy information contained in the SBR datasets. From Table 4, we can draw the following conclusions:

The performance of all models improves significantly on the four datasets when the Lorentz transformation is combined with the Euclidean SBR models, which demonstrates that the hierarchy information in power-law-like session-based recommendation data is essential for predicting user behavior, while such information is simply ignored by traditional SBR models built upon Euclidean space. Furthermore, the improvement for the Markov-based method (i.e., FPMC) and the attention-based method (i.e., SASRec) is more obvious than for the RNN-based (i.e., GRU4Rec) and GNN-based (i.e., SR-GNN) methods.

Our proposed hyperbolic contrastive graph representation learning method HCGR achieves the best results over all comparison models with or without the Lorentz transformation, and the performance of ECGR drops evidently when the Lorentz transformation is replaced with a Euclidean transformation on all datasets. Besides, we find that ECGR outperforms most baseline SBR models coupled with the Lorentz transformation, which indicates the advantage of the proposed contrastive graph representation learning method.
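As a hedged illustration of what lifting Euclidean embeddings onto the Lorentz model involves (not the paper's exact implementation), the standard exponential map at the origin o = (1, 0, ..., 0) for curvature -1 maps a tangent vector v onto the hyperboloid, where points satisfy the Minkowski constraint <x, x>_L = -1:

```python
import math

def exp_map_origin(v, eps=1e-9):
    """Map a Euclidean (tangent) vector v in R^d onto the Lorentz
    model H^d in R^{d+1} via the exponential map at the origin:
    x = (cosh||v||, sinh(||v||) * v / ||v||)."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm < eps:
        return [1.0] + [0.0] * len(v)  # the origin itself
    scale = math.sinh(norm) / norm
    return [math.cosh(norm)] + [scale * x for x in v]

def lorentz_inner(x, y):
    """Minkowski inner product <x, y>_L = -x0*y0 + sum_i xi*yi."""
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))
```

Any output of `exp_map_origin` satisfies `lorentz_inner(x, x) == -1` up to floating-point error, which is the defining constraint of the hyperboloid.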
Dataset  Metric  FPMC  FPMC(+hyperbolic)  GRU4Rec  GRU4Rec(+hyperbolic)  SASRec  SASRec(+hyperbolic)  SRGNN  SRGNN(+hyperbolic)  ECGR  HCGR 
H@10  0.0623  0.0684  0.0759  0.0771  0.0808  0.0868  0.0902  0.0910  0.1063  0.1071  
N@10  0.0376  0.0395  0.0398  0.0400  0.0452  0.0469  0.0484  0.0495  0.0642  0.0651  
M@10  0.0300  0.0306  0.0288  0.0288  0.0342  0.0347  0.0411  0.0417  0.0483  0.0523  
H@20  0.1149  0.0837  0.1509  0.1526  0.1494  0.1518  0.1262  0.1349  0.1294  0.1388  
N@20  0.0452  0.0449  0.0491  0.0509  0.0549  0.0557  0.0354  0.0588  0.0718  0.0749  
M@20  0.0321  0.0320  0.0312  0.0317  0.0370  0.0371  0.0431  0.0425  0.0516  0.0541  
  H@10  0.0853  0.0863  0.1091  0.1101  0.1091  0.1110  0.0940  0.0966  0.1068  0.1134  
N@10  0.0506  0.0555  0.0604  0.0617  0.0598  0.0603  0.0554  0.0563  0.0579  0.0539  
M@10  0.0400  0.0392  0.0456  0.0470  0.0447  0.0448  0.0435  0.0440  0.043  0.0487  
H@20  0.5013  0.5110  0.5544  0.5577  0.5391  0.5618  0.5279  0.5442  0.1452  0.1507  
N@20  0.0580  0.0498  0.0709  0.0724  0.0700  0.0706  0.0635  0.0659  0.0676  0.0733  
M@20  0.0420  0.0403  0.0485  0.0499  0.0475  0.0477  0.0458  0.0466  0.0456  0.0512  
H@10  0.4093  0.4149  0.4524  0.4566  0.4317  0.4463  0.4341  0.4469  0.4651  0.4798  
N@10  0.2190  0.2243  0.2719  0.2743  0.2328  0.2359  0.2709  0.2675  0.2572  0.2898  
M@10  0.1603  0.1657  0.2163  0.2181  0.1716  0.1715  0.2204  0.2221  0.1933  0.2252  
H@20  0.6185  0.6410  0.6647  0.6685  0.6261  0.6236  0.6544  0.6649  0.5681  0.5938  
N@20  0.2424  0.2488  0.2978  0.2999  0.2600  0.2652  0.2946  0.2952  0.2834  0.3162  
M@20  0.1668  0.1724  0.2235  0.2252  0.1791  0.1796  0.2270  0.2253  0.2005  0.2325  
H@10  0.5136  0.5372  0.5647  0.5657  0.5232  0.5253  0.5522  0.5665  0.5607  0.5772  
N@10  0.3429  0.3552  0.3822  0.3844  0.3540  0.3617  0.3730  0.3760  0.3823  0.3901  
M@10  0.2899  0.2987  0.3255  0.3280  0.3016  0.3112  0.3173  0.3233  0.3114  0.3373  
H@20  0.6185  0.6410  0.6647  0.6685  0.6261  0.6236  0.6544  0.6649  0.6619  0.6713  
N@20  0.3694  0.3814  0.4075  0.4104  0.3799  0.3766  0.3989  0.4010  0.4079  0.4135  
M@20  0.2972  0.3059  0.3324  0.3352  0.3087  0.3181  0.3245  0.3302  0.3474  0.3578 
Method  H@10  M@10  N@10  H@20  M@20  N@20  H@10  M@10  N@10  H@20  M@20  N@20  
HCGR_CE  0.11  0.0508  0.0645  0.1358  0.052  0.0725  0.4751  0.2182  0.2861  0.5827  0.2261  0.3144 
HCGR  0.1071  0.0523  0.0651  0.1388  0.0541  0.0749  0.4798  0.2252  0.2898  0.5938  0.2325  0.3162 
Improv  2.63%  2.92%  0.93%  2.20%  4.03%  3.32%  0.99%  3.21%  1.29%  1.90%  2.83%  0.57% 
Method  H@10  M@10  N@10  H@20  M@20  N@20  H@10  M@10  N@10  H@20  M@20  N@20  
HCGR_CE  0.1154  0.0453  0.0523  0.1532  0.048  0.0716  0.5743  0.3321  0.3876  0.665  0.3528  0.4123 
HCGR  0.1134  0.0487  0.0539  0.1507  0.0512  0.0733  0.5722  0.3373  0.3901  0.6713  0.3578  0.4135 
Improv  1.70%  7.5%  3.06%  0.016%  6.67%  2.37%  0.50%  1.56%  0.64%  0.95%  1.47%  0.29% 
5.3.2 Effect of Graph Aggregation Method
To further investigate the advantage of the proposed adaptive hyperbolic graph aggregation method, which utilizes multi-hop adjacent information, we conduct an ablation study comparing different graph information aggregation approaches within the hyperbolic contrastive representation learning framework on the four datasets. HCGR_GCN refers to a model in which the traditional spectrum-based graph convolution method is used to pass messages among adjacent neighbors, while HCGR_GAT refers to a model in which a graph attention-based method is used to aggregate adjacent information. The experimental results are shown in Figure 6. We can draw the following conclusions:

The inductive attention-based graph convolution model (i.e., HCGR_GAT) remarkably outperforms the transductive spectrum-based graph model (i.e., HCGR_GCN) on all datasets, which indicates that treating neighbors differentially and flexibly is essential to filter noisy information during message aggregation.

Not surprisingly, HCGR with our proposed multi-hop adjacent information aggregation method achieves the best performance. Compared to GAT, the main improvement of our aggregation method is that the multi-hop aggregated message is fully used during graph node representation optimization, which indicates that low-order and high-order mutual graph information are both critical for the final prediction; such low-order mutual information is simply ignored by GAT-like models.
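A toy sketch of the multi-hop idea (our own simplification in plain Euclidean arithmetic, not the paper's hyperbolic operator): the node update sums weighted k-hop propagations, H' = sum_k w_k * A^k * H, so that both low-order and high-order neighborhoods contribute, unlike a single-hop GAT-style layer. The fixed `hop_weights` stand in for the adaptive attention over hops.

```python
def matmul(a, b):
    """Plain dense matrix product of two nested-list matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def multi_hop_aggregate(adj, feats, hop_weights):
    """Combine K-hop propagated features: H' = sum_k w_k * A^k * H,
    with w_0 weighting each node's own features (k = 0 term)."""
    n, d = len(feats), len(feats[0])
    out = [[hop_weights[0] * feats[i][j] for j in range(d)] for i in range(n)]
    prop = feats
    for w in hop_weights[1:]:
        prop = matmul(adj, prop)  # one more hop of propagation
        for i in range(n):
            for j in range(d):
                out[i][j] += w * prop[i][j]
    return out
```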
5.3.3 Effects of Contrastive Ranking Loss
Diversity has become an important evaluation index in recommendation scenarios. To investigate the effect of contrastive learning on the performance of our proposed hyperbolic graph representation learning framework, we conduct an experimental analysis in which the contrastive ranking loss is removed. To be specific, HCGR_CE means that the contrastive ranking loss is removed from Eq. (4.25) while all other settings are kept the same as in HCGR. The experimental results are shown in Tables 5 and 6. We can draw the following observations:

On all datasets, the model optimized with the contrastive ranking loss (HCGR) outperforms the model optimized with the cross-entropy loss (HCGR_CE) with regard to the ranking evaluation metrics (MRR@K, NDCG@K), which indicates that the contrastive ranking loss can distinguish subtle distinctions between items within sessions and improve recommendation diversity.

With regard to the accuracy of the recommendation results, there is no obvious difference between the two models, which indicates that our contrastive ranking loss improves ranking performance without losing recommendation accuracy.
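The exact contrastive ranking loss is given in Eq. (4.25); as a hedged stand-in illustration, a margin-based contrastive objective over Lorentz distances, which pulls the positive closer to the anchor than every negative, could look like this (all names are ours):

```python
import math

def lorentz_distance(x, y):
    """Geodesic distance on the Lorentz model (curvature -1):
    d(x, y) = arccosh(-<x, y>_L)."""
    inner = -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))
    return math.acosh(max(-inner, 1.0))  # clamp for numerical safety

def contrastive_ranking_loss(anchor, positive, negatives, margin=1.0):
    """Hinge-style contrast: require the positive to be at least
    `margin` closer to the anchor than each negative sample."""
    d_pos = lorentz_distance(anchor, positive)
    return sum(max(0.0, margin + d_pos - lorentz_distance(anchor, n))
               for n in negatives) / len(negatives)
```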
5.3.4 Effects Of Embedding Size
We explore the impact of the embedding size on several evaluation indices, as this size significantly affects representation ability. We conduct the experiment on the and datasets for their generality and representativeness. The results are plotted in Figures 7 and 8. We have the following observations:

HCGR outperforms all comparison SBR methods at most embedding sizes on all indices. Especially for small embedding sizes, such as 32, our model still achieves better and more robust results, which indicates that introducing the hyperbolic transformation captures the latent hierarchy property and boosts model performance.

It is also observed that a proper embedding size is essential for graph node representation. When the embedding size is too small, it cannot fully express node information, resulting in poor performance; conversely, an overly large embedding size may induce overfitting on the dataset.
5.4 Case Study (RQ3)
5.4.1 Representation Analysis
We set up this case study to explore whether our model can learn the hierarchical structure of user behavior. Whether the hierarchical structure in the data is fully learned affects model performance, and this hierarchical structure can be reflected by calculating the distance between a representation and the origin. Since HCGR and ECGR operate in two different geometries, we use the gyrovector space distance and the tangent (Euclidean) distance, respectively, to calculate the distance from a target point to the origin. We set three boundaries in Euclidean space and Lorentz space, and divide the representations into four regions according to their distance from the origin; for example, items in region 1 are closest to the origin, whereas items in region 4 are farthest from it. To intuitively reflect the different popularity of items in different regions, we count the interaction times of nodes in all regions of the four datasets and visualize the statistics in Figure 9. From the results, it can be seen that the average number of interactions of items decreases from region 1 to region 4, which shows that both of our approaches, ECGR and HCGR, can model the hierarchical structure of session behavior. In addition, on all datasets, the average number of interactions of items in region 1 is higher for HCGR than for ECGR, while the averages in regions 3 and 4 are higher for ECGR than for HCGR. Compared with ECGR, HCGR better distinguishes items with different popularity and learns the hierarchical structure, which indicates that hyperbolic space is more suitable than Euclidean space for embedding hierarchical data in session-based recommendation tasks.
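The region assignment described above can be sketched as follows for the Lorentz side, where the distance from the origin o = (1, 0, ..., 0) reduces to arccosh of the time-like coordinate; the boundary values here are illustrative placeholders, not the ones used in the paper:

```python
import math

def lorentz_distance_to_origin(x):
    """For the Lorentz model with curvature -1 and origin
    o = (1, 0, ..., 0), d(o, x) = arccosh(x0)."""
    return math.acosh(max(x[0], 1.0))

def assign_region(x, boundaries=(0.5, 1.0, 1.5)):
    """Bin an embedding into region 1..4 using three distance
    boundaries, mirroring the four-region split in the case study."""
    d = lorentz_distance_to_origin(x)
    for region, b in enumerate(boundaries, start=1):
        if d < b:
            return region
    return len(boundaries) + 1
```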
5.4.2 Attention Analysis
Taking advantage of the attention mechanism, we visualize the attention weights between user behaviors in Figure 10, which reflects the different influences within the same session for the two models (HCGR, ECGR). We randomly select three sessions of length 10, 30, and 50, respectively, from the test set. For the same session in the heatmap, the upper row shows the attention weights between the related items and the next item the user is most likely to click, as modeled by ECGR, while the lower row shows the corresponding attention weights modeled by HCGR. From the heatmap, we discover that not all behaviors in the same session contribute equally to generating the recommendation. In addition, we find that the attention weights of HCGR for session behaviors are higher than those of ECGR at many key positions. Specifically, HCGR gives higher scores as the scores given by ECGR increase, and HCGR better distinguishes item importance. This shows that hyperbolic space can better represent the hierarchical structure of the data, thus making the attention mechanism capable of adaptively measuring the influence of session behaviors.
5.5 Embedding Analysis and Visualization
We visualize the item embeddings in two and three dimensions on the four datasets, respectively, in Figure 11 (a)–(d). Item popularity is represented by color, decreasing from red to green. Before training, we randomly initialize all item embeddings. As shown in Figure 11, the item embeddings clearly present a hierarchical structure based on item popularity after training: we observe the most popular items near the center of the projection space and unpopular items far from it. Similar results are obtained on the other datasets.
6 Conclusion
GNN-based models cannot effectively capture the hierarchical information that regularly appears in recommendation scenarios. Enlightened by the powerful representation ability of non-Euclidean geometry, which has been proved to reduce the distortion of embedding power-law distributed data, we proposed a hyperbolic contrastive graph recommender (HCGR) that utilizes Lorentz hyperbolic space for item embeddings to preserve their coherent and hierarchical properties. We designed a novel hyperbolic graph message propagation mechanism to account for the discrepancy between Euclidean and hyperbolic space during information passing. In addition, we introduced contrastive learning to enhance model performance by optimizing the distance between positive and negative samples in hyperbolic space, considering that distance in hyperbolic space cannot be expressed well by traditional losses such as the CE and BPR losses. For future work, we will extend our method to sequential recommendation involving user profiles and more item features. Besides, we will learn parsimonious representations of symbolic data by embedding datasets into spherical or product spaces, and optimize matrix multiplication in non-Euclidean geometry.
References
[1] (2020) Holography on tessellations of hyperbolic space. Physical Review D 102. Cited by: §1.
[2] (2017) Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine 34 (4), pp. 18–42. Cited by: §2.2.
[3] (2019) Scalable hyperbolic recommender systems. arXiv preprint arXiv:1902.08648. Cited by: §2.2, §4.4.

[4] (2019) Hyperbolic graph convolutional neural networks. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Vancouver, BC, Canada, pp. 4869–4880. Cited by: §2.2.
[5] (2020) Hyperbolic interaction model for hierarchical multi-label classification. In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020), New York, NY, USA, pp. 7496–7503. Cited by: §2.2.
[6] (2020) Handling information loss of graph neural networks for session-based recommendation. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1172–1180. Cited by: §2.1.4, 10th item.

[7] (2021) Self-supervised hyperboloid representations from logical queries over knowledge graphs. In WWW '21: The Web Conference 2021, Virtual Event / Ljubljana, Slovenia, pp. 1373–1384. Cited by: §2.2.
[8] (2018) MV-RNN: a multi-view recurrent neural network for sequential recommendation. IEEE Transactions on Knowledge and Data Engineering 32 (2), pp. 317–331. Cited by: §2.1.2.
[9] (2010) Non-Euclidean dissimilarities: causes and informativeness. In Structural, Syntactic, and Statistical Pattern Recognition, Berlin, Heidelberg, pp. 324–333. Cited by: §2.2.
[10] (2020) HME: a hyperbolic metric embedding approach for next-POI recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020), pp. 1429–1438. Cited by: §2.2.
 [11] (2018) Hyperbolic neural networks. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 38, 2018, Montréal, Canada, pp. 5350–5360. Cited by: §2.2, §3.1.
 [12] (2019) Hyperbolic attention networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 69, 2019, Cited by: §3.1.
 [13] (2016) Fusing similarity models with markov chains for sparse sequential recommendation. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 191–200. Cited by: §1, §2.1.1, 2nd item.
 [14] (2020) Lightgcn: simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pp. 639–648. Cited by: §4.4.
 [15] (2015) Sessionbased recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939. Cited by: §1, §2.1.2, 3rd item.
 [16] (2018) Improving sequential recommendation with knowledgeenhanced memory networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 505–514. Cited by: §2.1.4.
 [17] (2018) Selfattentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM), pp. 197–206. Cited by: §2.1.3, §4.5, 5th item.
 [18] (2016) Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §4.4.
 [19] (2010) Hyperbolic geometry of complex networks. Physical Review E 82 (3), pp. 036106. Cited by: §1.
 [20] (2021) HSR: hyperbolic social recommender. arXiv preprint arXiv: 2102.09389 abs/2102.09389. Cited by: §2.2.
 [21] (2017) Neural attentive sessionbased recommendation. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1419–1428. Cited by: §2.1.3, 4th item.
 [22] (2019) Hyperbolic graph neural networks. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 814, 2019, Vancouver, BC, Canada, pp. 8228–8239. Cited by: §2.2.
 [23] (2018) STAMP: shortterm attention/memory priority model for sessionbased recommendation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1831–1839. Cited by: §2.1.1, §2.1.3, 6th item.
 [24] (2020) Memory augmented graph neural networks for sequential recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, pp. 5045–5052. Cited by: §2.1.4.
 [25] (2021) Knowledgeenhanced topk recommendation in poincaré ball. In ThirtyFifth AAAI Conference on Artificial Intelligence, AAAI 2021, ThirtyThird Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 29, 2021, pp. 4285–4293. Cited by: §2.2.
 [26] (2006) Rebuilding community ecology from functional traits. Trends in Ecology and Evolution 21 (4), pp. 178–185. External Links: Document Cited by: §2.2.
 [27] (2017) Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 49, 2017, Long Beach, CA, USA, pp. 6338–6347. Cited by: §2.2.

[28] (2018) Learning continuous hierarchies in the Lorentz model of hyperbolic geometry. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden, Vol. 80, pp. 3776–3785. Cited by: §3.1.
[29] (2012) Popularity versus similarity in growing networks. Nature 489 (7417), pp. 537–540. Cited by: §1.
 [30] (2019) Rethinking the item order in sessionbased recommendation with graph neural networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 579–588. Cited by: §2.1.4, 9th item.
 [31] (1995) Introduction to hyperbolic geometry. American Mathematical Monthly 103 (2), pp. 185. Cited by: §3.1.
 [32] (2003) Hierarchical organization in complex networks. Physical review E 67 (2), pp. 026112. Cited by: §1.
 [33] (2010) Factorizing personalized markov chains for nextbasket recommendation. In Proceedings of the 19th international conference on World wide web, pp. 811–820. Cited by: §1, §2.1.1, 1st item.
 [34] (2020) Multimanifold learning for largescale targeted advertising system. arXiv preprint arXiv: 2007.02334. Cited by: §2.2, §4.4.
 [35] (2021) HGCF: hyperbolic graph convolution networks for collaborative filtering. In Proceedings of the Web Conference 2021, pp. 593–601. Cited by: §1.
 [36] (2019) Poincare glove: hyperbolic word embeddings. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 69, 2019, Cited by: §2.2, §3.1.
 [37] (2017) Attention is all you need. arXiv preprint arXiv:1706.03762. Cited by: §2.1.3, §4.5.
 [38] (2017) Graph attention networks. arXiv preprint arXiv:1710.10903. Cited by: §4.4.
 [39] (2021) HyperSoRec: exploiting hyperbolic user and item representations with multiple aspects for socialaware recommendation. Cited by: §2.2, §4.4.
 [40] (2015) Learning hierarchical representation model for nextbasket recommendation. In Proceedings of the 38th International ACM SIGIR conference on Research and Development in Information Retrieval, pp. 403–412. Cited by: §2.1.1.
 [41] (2020) Beyond clicks: modeling multirelational item graph for sessionbased target behavior prediction. In Proceedings of The Web Conference 2020, pp. 3056–3062. Cited by: §2.1.4.
 [42] (2019) A neural influence diffusion model for social recommendation. In Proceedings of the 42nd international ACM SIGIR conference on research and development in information retrieval, pp. 235–244. Cited by: §2.1.1, §2.1.4.
 [43] (2019) Sessionbased recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 346–353. Cited by: §1, §2.1.4, 7th item.
 [44] (2019) Personalizing graph neural networks with attention mechanism for sessionbased recommendation. arXiv preprint arXiv:1910.08887. Cited by: §2.1.4.
 [45] (2019) Graph contextualized selfattention network for sessionbased recommendation.. In IJCAI, Vol. 19, pp. 3940–3946. Cited by: §1, §2.1.4, 8th item.
 [46] (2017) Social collaborative filtering by trust. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (8), pp. 1633–1647. External Links: Document Cited by: §2.2.
 [47] (2019) Featurelevel deeper selfattention network for sequential recommendation.. In IJCAI, pp. 4320–4326. Cited by: §2.1.3, §4.5.
 [48] (2020) Sentimentguided sequential recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1957–1960. Cited by: §1.
 [49] (2020) Memory reorganization: a symmetric memory network for reorganizing neighbors and topics to complete rating prediction. IEEE Access 8, pp. 81876–81886. Cited by: §2.1.2.