1. Introduction
Hyperbolic geometry has recently been identified as a powerful tool for neural network based representation learning
(Nickel and Kiela, 2017; Chamberlain et al., 2017). Hyperbolic space is negatively curved, making it better suited than flat Euclidean geometry for representing relationships between objects that are organized hierarchically (Nickel and Kiela, 2017, 2018) or that can be described by graphs taking the form of complex networks (Krioukov et al., 2010).

Recommender systems are a pervasive technology, providing a major source of revenue and user satisfaction in customer-facing digital businesses (Gomez-Uribe and Hunt, 2015). They are particularly important in e-commerce, where, due to large catalogue sizes, customers are often unaware of the full extent of the available products (Cardoso et al., 2018). ASOS.com is a UK-based online clothing retailer with 18.4m active customers and a live catalogue of roughly 187k products as of December 2018. Products are sold through multiple international websites and apps using a number of recommender systems. These include (1) personalised 'My Edit' recommendations that appear in the app and in emails, (2) outfit completion recommendations, (3) 'You Might Also Like' recommendations that suggest alternative products, shown in Figure 1, and (4) out-of-stock alternative recommendations. All are implicitly Euclidean, neural network based recommenders, and we believe that each of them could be improved by the use of hyperbolic geometry.
Key to the success of a recommender system is the accurate representation of user preferences and item characteristics. Matrix factorization (Sarwar et al., 2000; Hu et al., 2008) is one of the most common approaches for this task. In its basic form, this technique represents user-item interactions as a matrix. A factorization into two low-rank matrices, representing users and items respectively, is then computed so that it approximates the original interaction matrix. The result is a compact Euclidean vector representation for every user and every item that is useful for recommendation purposes, i.e. to estimate the interaction likelihood of unobserved pairs of users and items.
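The low-rank factorization described above can be sketched in a few lines of numpy. This is an illustrative toy (gradient descent on a dense 4x3 interaction matrix with made-up values), not any production system discussed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit-feedback matrix: rows are users, columns are items (1 = interaction).
R = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])

k = 2                                    # latent dimension
U = 0.1 * rng.standard_normal((4, k))    # user factors
V = 0.1 * rng.standard_normal((3, k))    # item factors

initial_err = np.abs(R - U @ V.T).mean()

lr, reg = 0.05, 0.01
for _ in range(1000):
    E = R - U @ V.T                      # reconstruction error
    U += lr * (E @ V - reg * U)          # gradient step on the squared error
    V += lr * (E.T @ U - reg * V)

final_err = np.abs(R - U @ V.T).mean()   # the low-rank product now approximates R
```

The rows of `U` and `V` are exactly the "compact Euclidean vector representations" of users and items; the dot product `U[u] @ V[i]` estimates the interaction likelihood of an unobserved pair.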
The user-item interaction matrix can be treated as the adjacency matrix of a random, undirected, bipartite graph, where edges exist between nodes representing users and items. In this paradigm, user and item representations are learned by embedding the nodes of the graph rather than by minimising the matrix reconstruction error. The simplest type of random graph is the Erdős–Rényi, or completely random, graph (Erdos and Renyi, 1960). In this model, an edge between any two nodes is generated independently with constant probability. If user-item interaction graphs were completely random, recommendation systems based on them would be impossible. Instead, these graphs exhibit clustering produced by similar users interacting with similar items. In addition, power-law degree distributions and small-world effects are also common, as the rich-get-richer effect of preferential attachment causes some users and items to have orders of magnitude more interactions than the median (Barabási and Albert, 1999). Graphs of this form are known as complex networks (Newman, 2003).

Completely random graphs are associated with an underlying Euclidean geometry, but the heterogeneous topology of complex networks implies that the underlying geometry is hyperbolic (Krioukov et al., 2010). There are two reasons why embedding complex networks in hyperbolic geometry can be expected to perform better than in Euclidean geometry. The first is that hyperbolic geometry is the continuous analogue of tree graphs (Gromov, 2007), and many complex networks display a core-periphery hierarchy and tree-like structures (Adcock et al., 2013). The second is that power-law degree distributions appear naturally when points are randomly sampled in hyperbolic space and connected with a probability that is an inverse function of their distance (Krioukov et al., 2010).
In our hyperbolic recommender system, the Euclidean vector representations of users and items are replaced with points in hyperbolic space. As hyperbolic space is curved, the standard optimisation tools do not work and the machinery of Riemannian gradient descent must be employed (Bonnabel, 2013). The additional complexity of Riemannian gradient descent is one of the major challenges to producing large-scale hyperbolic recommender systems. We mitigate this through two major innovations: (1) we carefully choose a model of hyperbolic space that permits exact gradient descent and overcomes problems with numerical instability, and (2) we do not explicitly represent customers, instead implicitly representing them through the hyperbolic Einstein midpoint of their interaction histories. By doing so, our hyperbolic recommender system is able to scale to the full ASOS dataset of 18.4 million customers and one million products (187k products are live at any one time, but products are short-lived, so recommendations are trained on a history containing roughly 1m products).
We make the following contributions:

- We justify the use of hyperbolic representations for neural recommender systems through an analogy with complex networks.
- We demonstrate that hyperbolic recommender systems can significantly outperform Euclidean equivalents, by between 2 and 14%.
- We develop an asymmetric hyperbolic recommender system that scales to millions of users.
2. Related Work
The original work connecting hyperbolic space with complex networks was (Krioukov et al., 2010), and many scale-free networks, such as the internet (Shavitt and Tankel, 2008; Boguna et al., 2010) or academic citation networks (Clough et al., 2015; Clough and Evans, 2016), have been shown to be well described by hyperbolic geometry. Hyperbolic graph embeddings were applied successfully to the problem of greedy message routing in (Kleinberg, 2007; Cvetkovski and Crovella, 2009), and the embedding of general graphs in low-dimensional hyperbolic space was addressed by (Bläsius et al., 2016).
Hyperbolic geometry was introduced into the embedding layers of neural networks by (Nickel and Kiela, 2017; Chamberlain et al., 2017), who used the Poincaré ball model. (De Sa et al., 2018) analysed the trade-offs in numerical precision and embedding size in these different approaches, and (Ganea et al., 2018a) extended these models to include undirected graphs. (Nickel and Kiela, 2018) and (Wilson and Leimeister, 2018) showed that the Lorentzian (or hyperboloid) model of hyperbolic space can be used to write simpler and more efficient optimisers than the Poincaré ball. Several works have used shallow hyperbolic neural networks to model language (Dhingra et al., 2018; Leimeister and Wilson, 2018; Tifrea et al., 2019). Neural networks built on Cartesian products of isotropic spaces that include hyperbolic spaces have been developed in (Gu et al., 2019), and adaptive optimisers for such spaces appear in (Becigneul and Ganea, 2019).
3. Background
Hyperbolic geometry is an involved subject and comprehensive introductions appear in many textbooks (e.g. (Cannon et al., 1997)). In this section we include only the material necessary for the remainder of the paper. Hyperbolic space is a homogeneous, isotropic Riemannian space. A Riemannian manifold $\mathcal{M}$ is a smooth differentiable manifold. Each point $\mathbf{x}$ on the manifold is associated with a locally Euclidean tangent space $T_{\mathbf{x}}\mathcal{M}$. The manifold is equipped with a Riemannian metric that specifies a smoothly varying positive definite inner product on $T_{\mathbf{x}}\mathcal{M}$ at each point $\mathbf{x}$. The shortest path between points is not a straight line, but a geodesic curve whose length is defined by the metric tensor. The map between the tangent space and the manifold is called the exponential map $\exp_{\mathbf{x}}$. As hyperbolic space cannot be isometrically embedded in Euclidean space, five different models that sit inside a Euclidean ambient space are commonly used as representations.

3.1. Models of Hyperbolic Space
There are multiple models of hyperbolic space because each approach preserves some properties of the underlying space but distorts others. Each $n$-dimensional model has its own metric, geodesics and isometries and occupies a different subset of its Euclidean ambient space. The models are all connected by simple projective maps, and the most relevant for this work are the hyperboloid and the Klein and Poincaré balls. As points in the models of hyperbolic space are not closed under multiplication and addition, they are not vectors in the mathematical sense. We denote them in bold font to indicate that they are one-dimensional arrays.
3.1.1. Poincaré Ball Model
Much of the existing work on hyperbolic neural networks uses the Poincaré ball model (Nickel and Kiela, 2017; Chamberlain et al., 2017; Ganea et al., 2018a, b). It is conceptually the simplest model and our preferred choice for low-dimensional visualisations of embeddings. However, gradient descent in the Poincaré ball is computationally complex. The Poincaré ball models infinite $n$-dimensional hyperbolic space as the interior of the unit ball. The metric tensor is
$g_{ij}(\mathbf{x}) = \left(\frac{2}{1-\lVert\mathbf{x}\rVert^2}\right)^2 \delta_{ij} \qquad (1)$
where $\mathbf{x}$ is a generic point and $\delta_{ij}$ is the Kronecker delta. It is a function only of the Euclidean distance of $\mathbf{x}$ to the origin. Hyperbolic distances from the origin grow exponentially with Euclidean distance, reaching infinity at the boundary of the ball. As the metric is a point-by-point scaling of the Euclidean metric, the model is conformal.
The hyperbolic distance between Euclidean points $\mathbf{x}$ and $\mathbf{y}$ is

$d(\mathbf{x}, \mathbf{y}) = \operatorname{arccosh}\left(1 + \frac{2\lVert\mathbf{x}-\mathbf{y}\rVert^2}{(1-\lVert\mathbf{x}\rVert^2)(1-\lVert\mathbf{y}\rVert^2)}\right) \qquad (2)$
Gradient descent optimisation within the Poincaré ball is challenging because the ball is bounded. Strategies to manage this problem include moving points that escape the ball back inside by a small margin (Nickel and Kiela, 2017) or carefully managing numerical precision and other model parameters (De Sa et al., 2018).
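The distance of Eq. (2) and the "move escaped points back inside" strategy can be sketched in numpy. This is an illustrative sketch, not the implementation used in the experiments; the small `eps` guard against division by zero near the boundary is our own assumption:

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Hyperbolic distance between two points inside the unit ball (Eq. 2)."""
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / (denom + eps))

def clip_to_ball(x, margin=1e-5):
    """Move a point that escaped the unit ball back inside by a small margin,
    as in the strategy of Nickel and Kiela (2017)."""
    norm = np.linalg.norm(x)
    if norm >= 1.0:
        return x / norm * (1.0 - margin)
    return x

origin = np.zeros(2)
near_boundary = np.array([0.999, 0.0])
# Distances blow up near the boundary, illustrating why optimisation there is delicate.
print(poincare_distance(origin, near_boundary))
```

For a point at Euclidean radius $r$ from the origin, Eq. (2) reduces to $2\operatorname{artanh}(r)$, which the function above reproduces.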
3.1.2. The Klein Model
The Klein model affords the most computationally efficient calculation of the Einstein midpoint, which we use to represent user vectors as aggregates of item vectors. The model consists of the set of points

$\mathbb{K}^n = \{\mathbf{x} \in \mathbb{R}^n : \lVert\mathbf{x}\rVert < 1\} \qquad (3)$
The projection of a point $\mathbf{x} = (x_0, x_1, \ldots, x_n)$ from the hyperboloid model to the Klein model is given by

$\pi(\mathbf{x}) = \frac{(x_1, \ldots, x_n)}{x_0} \qquad (4)$
while the inverse projection of a Klein point $\mathbf{k}$ is

$\pi^{-1}(\mathbf{k}) = \frac{(1, \mathbf{k})}{\sqrt{1-\lVert\mathbf{k}\rVert^2}} \qquad (5)$
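The two projections of Eqs. (4) and (5) are one-liners in numpy; this is an illustrative sketch under the convention that the time-like coordinate $x_0$ is stored first:

```python
import numpy as np

def hyperboloid_to_klein(x):
    """Project a hyperboloid point (x0, x1, ..., xn) into the Klein ball (Eq. 4)."""
    return x[1:] / x[0]

def klein_to_hyperboloid(k):
    """Lift a Klein-ball point back onto the hyperboloid (Eq. 5)."""
    x0 = 1.0 / np.sqrt(1.0 - np.sum(k ** 2))
    return np.concatenate(([x0], x0 * k))

k = np.array([0.3, -0.4])
x = klein_to_hyperboloid(k)
# The lifted point satisfies the hyperboloid constraint <x, x>_M = -1.
print(-x[0] ** 2 + np.sum(x[1:] ** 2))
```

The round trip `hyperboloid_to_klein(klein_to_hyperboloid(k))` recovers `k`, confirming the two maps are inverses.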
3.1.3. The Hyperboloid Model
Unlike the Poincaré or Klein balls, the hyperboloid model is unbounded. We use the hyperboloid model as it offers efficient, closed-form Riemannian Stochastic Gradient Descent (RSGD). The set of points forms the upper sheet of an $n$-dimensional hyperboloid embedded in an $(n+1)$-dimensional ambient Minkowski space equipped with the following metric tensor:

$g = \operatorname{diag}(-1, 1, \ldots, 1) \qquad (6)$
The inner product in Minkowski space resulting from the application of this metric tensor is

$\langle\mathbf{x}, \mathbf{y}\rangle_M = -x_0 y_0 + \sum_{i=1}^{n} x_i y_i \qquad (7)$
The hyperboloid can be defined as the set of points

$\mathbb{H}^n = \{\mathbf{x} \in \mathbb{R}^{n+1} : \langle\mathbf{x}, \mathbf{x}\rangle_M = -1,\ x_0 > 0\} \qquad (8)$
where the hyperbolic distance between points $\mathbf{x}$ and $\mathbf{y}$ is defined as

$d(\mathbf{x}, \mathbf{y}) = \operatorname{arccosh}(-\langle\mathbf{x}, \mathbf{y}\rangle_M) \qquad (9)$
The tangent space at a point $\mathbf{x}$, $T_{\mathbf{x}}\mathbb{H}^n$, is the set of points $\mathbf{v}$ satisfying

$T_{\mathbf{x}}\mathbb{H}^n = \{\mathbf{v} : \langle\mathbf{x}, \mathbf{v}\rangle_M = 0\} \qquad (10)$
The projection of an ambient-space vector $\mathbf{z}$ onto the tangent space is defined as

$\operatorname{proj}_{\mathbf{x}}(\mathbf{z}) = \mathbf{z} + \langle\mathbf{x}, \mathbf{z}\rangle_M\,\mathbf{x} \qquad (11)$
Finally, the exponential map from the tangent space to the hyperboloid is defined as

$\exp_{\mathbf{x}}(\mathbf{v}) = \cosh(\lVert\mathbf{v}\rVert_M)\,\mathbf{x} + \sinh(\lVert\mathbf{v}\rVert_M)\,\frac{\mathbf{v}}{\lVert\mathbf{v}\rVert_M} \qquad (12)$

where $\lVert\mathbf{v}\rVert_M = \sqrt{\langle\mathbf{v}, \mathbf{v}\rangle_M}$.
4. Why Hyperbolic Geometry?
There is an intimate connection between complex networks, hyperbolic geometry and recommender systems. In recommender systems, the underlying graph is a two-mode, or bipartite, graph that connects users and items with an edge any time a user interacts with an item. Bipartite graphs can be projected into single-mode graphs, as depicted in Figure 2, by using shared-neighbour counts (or many other metrics) to represent the similarity between any pair of nodes of the same type. As such, bipartite graphs can be seen as the generative model for many complex networks (Guillaume and Latapy, 2006); e.g., the item similarity graph is the one-mode projection of the user-item bipartite graph onto the set of items. This connection is even more explicit if we consider that, on the one hand, algorithms based on bipartite projections have been devised to produce personalised recommendations (Zhou et al., 2007), while on the other hand, link prediction for graphs can be achieved via matrix factorisation (Menon and Elkan, 2011). The topology of user-item networks and their projections has been widely studied and shown to exhibit the properties of complex networks (e.g. (Cano et al., 2006)). However, the exact influence of the underlying network structure on the performance of recommender systems remains an open question (Zanin et al., 2009; Guo and Liu, 2010).
The link between hyperbolic geometry and complex networks was established in the seminal paper by (Krioukov et al., 2010), who show that 'hyperbolic geometry naturally emerges from network heterogeneity in the same way that network heterogeneity emerges from hyperbolic geometry'. If nodes are laid out uniformly at random in hyperbolic space and connected randomly as an inverse function of distance, then a complex network is obtained. Conversely, if the nodes of a complex network are treated as points in a latent metric space, where connections are more likely to form between closer nodes, then the network's heterogeneous topology implies a latent hyperbolic geometry. A similar approach has recently been applied by the same authors to characterise bipartite graphs (Kitsak et al., 2017).
In Table 1, we report the basic statistics of the bipartite networks underlying the recommendation datasets under study. We argue that, given the complex nature of these networks, a hyperbolic space is better suited to embed them than a Euclidean one. Finally, we note that it would be a remarkable coincidence, given the large number of possibilities, if Euclidean geometry were both the only geometry that practitioners had tried and the optimal geometry for these problems.
Data set | density | γ | KS dist. | p-value

Automotive
Cell Phones and Accessories
Clothing Shoes and Jewelry
Musical Instruments
Patio Lawn and Garden
Sports and Outdoors
Tools and Home Improvement
Toys and Games
MovieLens 100K
MovieLens 20M
ASOS.com Menswear

Table 1. γ: estimated exponent of the maximum-likelihood power-law fit, KS dist.: Kolmogorov-Smirnov test statistic for the distance between the data and the fitted power-law, and p-value of the test. Small values reject the hypothesis that the data could have been drawn from the fitted power-law distribution. The number of customers and products in the ASOS dataset are omitted for commercial reasons.

5. Hyperbolic Recommender System
Here we outline the overall design and individual components of our hyperbolic recommendation system, before describing each element and our detailed design choices.
The recommender system is shown in Figure 3. Raw data relating to customer interactions with products is stored in Microsoft Blob Storage and preprocessed into labelled customer interaction histories using Apache Spark. Hyperbolic representation learning is implemented in Keras (Chollet et al., 2015) with the TensorFlow (Abadi et al., 2015) backend. The learned representations are made available to a real-time recommendation service using Cosmos DB, from where they are presented to customers on web or app clients.

At a high level, our recommendation algorithm is a neural network based recommender with a learning-to-rank loss that represents users and items, not as Euclidean vectors, but as points in hyperbolic space. It is trained on labelled customer-product interaction histories, where the label is the next purchased product. As the ASOS dataset is highly asymmetric, having an order of magnitude more users than items, we do not explicitly represent users. Instead they are implicitly represented through an aggregate of the representations of the items they interact with (Cardoso et al., 2018). For the ASOS dataset, using an asymmetric approach reduces the number of model parameters by a factor of 20 and has the additional benefit that dynamic user representations allow users' interests to change over time and with context.
Given this outline, our implementation contains four major components:

- A loss function: a ranking function to optimise.
- A metric: used to define item-item or user-item similarity.
- An optimiser: e.g. stochastic Riemannian gradient descent on the hyperboloid.
- An aggregator: to combine item representations into a user representation.

We investigated several possible approaches for each component and these are detailed in the remainder of the section.
5.1. Loss Function
The baseline model for the hyperbolic recommender system is Bayesian Personalized Ranking (BPR) (Rendle et al., 2009). The BPR framework uses a triplet loss where $u$ indexes a user, $i$ indexes an item that they interact with and $j$ indexes a negative sample. The parameters $\Theta$, which constitute the embedding vectors, are found through

$\Theta^* = \operatorname{argmax}_{\Theta} \sum_{(u,i,j)} \ln \sigma(\hat{x}_{uij}) - \lambda \lVert\Theta\rVert^2 \qquad (13)$

where $(u,i,j)$ runs over all pairs of positive and negative items associated with each user and $\hat{x}_{uij}$ is given by

$\hat{x}_{uij} = \hat{x}_{ui} - \hat{x}_{uj} \qquad (14)$
where the score $\hat{x}_{ui}$ is computed from the vectors $\mathbf{w}_u$ and $\mathbf{v}_i$ representing user $u$ and item $i$ respectively. We acknowledge the existence of a preprint by (Vinh et al., 2019) that addresses the recommendation problem in hyperbolic space using BPR. Their approach is symmetric and mirrors the setup from (Nickel and Kiela, 2017) for optimisation in the Poincaré ball. While (Vinh et al., 2019) claim better performance for their hyperbolic recommender system than a range of Euclidean baseline models, many of the performance metrics quoted for these baseline models are worse than random, and the performance of their hyperbolic systems also appears to be lower than the standard naive baseline of recommending items based on their popularity (number of interactions) in the historic data.
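For a single triple, the BPR objective of Eqs. (13)–(14) reduces to a few lines. This is an illustrative sketch: the choice of negative distance as the score is one natural option in the hyperbolic setting, not a detail fixed by the BPR framework itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpr_loss(score_ui, score_uj):
    """Negative log-likelihood for one (user, positive, negative) triple,
    i.e. -ln sigma(x_uij) with x_uij = x_ui - x_uj (Eqs. 13-14),
    omitting the regularisation term."""
    return -np.log(sigmoid(score_ui - score_uj))

# In the hyperbolic setting a natural score is the negative distance,
# score_ui = -d(w_u, v_i), so a positive item closer to the user than the
# negative sample produces a small loss.
print(bpr_loss(-0.2, -1.5))
```

Maximising Eq. (13) is equivalent to minimising this loss summed over sampled triples, plus the $\lambda\lVert\Theta\rVert^2$ regulariser.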
An alternative to BPR is the Weighted Margin-Rank Batch (WMRB) loss (Liu and Natarajan, 2017), which first approximates the rank of the positive item using a set of negative samples $N$:

$\operatorname{rank}(i \mid u) \approx \sum_{j \in N} \left| \epsilon + d(\mathbf{x}_u, \mathbf{v}_i) - d(\mathbf{x}_u, \mathbf{v}_j) \right|_+ \qquad (15)$

where $d(\mathbf{x}_u, \mathbf{v})$ is the distance between the user representation $\mathbf{x}_u$ and an item representation $\mathbf{v}$, $|\cdot|_+$ is the ReLU activation and $\epsilon$ is a slack parameter such that terms only contribute to the loss if $d(\mathbf{x}_u, \mathbf{v}_j) < d(\mathbf{x}_u, \mathbf{v}_i) + \epsilon$. WMRB calculates a pseudo-ranking for the positive sample because contributions are only made to the sum when negative samples have higher scores, i.e. would be placed before the positive example if ranked. The slack parameter can be learned, but our experiments indicate that the model is not sensitive to this value and we keep it fixed. The loss function is then defined as:
$\mathcal{L} = \sum_{(u,i)} \log\left(1 + \operatorname{rank}(i \mid u)\right) \qquad (16)$
The logarithm is applied because, from a user's perspective, the difference between two large ranks matters far less than the difference between two small ones: a positive sample buried deep in the ranking is almost equally bad wherever exactly it lands. In pairwise ranking methods such as BPR, where only one negative example is sampled per positive example, it is quite likely that the positive example already has a higher rank than the negative example and thus there is nothing for the model to learn. In WMRB, where multiple negative samples are used per positive example, it is much more likely that an incorrectly ranked negative example has been sampled and therefore the model can make useful parameter updates. It has been demonstrated that WMRB leads to faster convergence than pairwise loss functions and improved performance on a set of benchmark recommendation tasks (Liu and Natarajan, 2017).
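The WMRB loss for a single (user, positive item) pair can be sketched as follows; this is an illustrative sketch of Eqs. (15)–(16) with an assumed default slack of 1, not the tuned production configuration:

```python
import numpy as np

def wmrb_loss(d_pos, d_negs, slack=1.0):
    """Weighted Margin-Rank Batch loss for one (user, positive item) pair.

    d_pos  -- distance from the user to the positive item
    d_negs -- distances from the user to a batch of negative samples
    slack  -- margin epsilon; a negative only contributes when it would be
              ranked within the margin of the positive item
    """
    margins = slack + d_pos - np.asarray(d_negs)
    pseudo_rank = np.sum(np.maximum(margins, 0.0))   # ReLU hinge per negative (Eq. 15)
    return np.log(1.0 + pseudo_rank)                 # logarithmic damping (Eq. 16)

# A well-ranked positive (much closer to the user than every negative) gives zero loss.
print(wmrb_loss(0.5, [3.0, 4.0, 5.0]))
```

Note how, unlike a pairwise loss, every badly ranked negative in the batch contributes a gradient, which is the source of the faster convergence claimed above.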
5.2. Metrics
Each model of hyperbolic space has a distance metric that could be used as the basis for a hyperbolic recommender system. We use the hyperboloid model as it is the best suited to stochastic gradient descent based optimisation because it is unbounded and has closed-form RSGD updates. These factors have been shown in previous work to lead to significantly more efficient optimisers (Wilson and Leimeister, 2018; Nickel and Kiela, 2018). The hyperboloid distance is given by

$d(\mathbf{x}, \mathbf{y}) = \operatorname{arccosh}(-\langle\mathbf{x}, \mathbf{y}\rangle_M) \qquad (17)$
As $\operatorname{arccosh}(z)$ is not defined for $z < 1$, and $-\langle\mathbf{x}, \mathbf{y}\rangle_M < 1$ can occur due to numerical instability, care must be taken within the optimiser to either catch these cases or use suitably high-precision numbers (see (De Sa et al., 2018)). In addition, the derivative of the distance

$\frac{\partial d(\mathbf{x}, \mathbf{y})}{\partial \mathbf{y}} = -\frac{\mathbf{x}}{\sqrt{\langle\mathbf{x}, \mathbf{y}\rangle_M^2 - 1}} \qquad (18)$
has the property that it diverges as $\mathbf{y} \to \mathbf{x}$ because $\langle\mathbf{x}, \mathbf{y}\rangle_M \to -1$. In the asymmetric framework, this is guaranteed to happen for every user that has interacted with only a single item, as their representation then coincides with that item's. To protect against infinities, it is possible to use a small margin $m$, leading to a distance function of
$d_m(\mathbf{x}, \mathbf{y}) = \operatorname{arccosh}(-\langle\mathbf{x}, \mathbf{y}\rangle_M + m) \qquad (19)$
As the hyperboloid distance is a monotone function of the Minkowski inner product and our objective is to rank points, the two are interchangeable. We generally favour the inner product, as its gradient does not contain a singularity at $d = 0$ and it is faster to compute.
5.3. Optimiser
The optimiser uses RSGD to perform gradient descent updates on the hyperboloid. There are three steps: (1) the inverse Minkowski metric $g^{-1}$ is applied to the Euclidean gradients $\nabla_E \mathcal{L}$ of the loss function to give Minkowski gradients $\mathbf{h}$, (2) the Minkowski gradients are projected onto the tangent space to give tangent gradients $\operatorname{grad} \mathcal{L}$, and (3) points on the manifold are updated by mapping from the tangent space to the manifold with learning rate $\eta$ through the exponential map $\exp_{\mathbf{x}}$:

$\mathbf{h} = g^{-1} \nabla_E \mathcal{L} \qquad (20)$
$\operatorname{grad} \mathcal{L} = \mathbf{h} + \langle\mathbf{x}, \mathbf{h}\rangle_M\,\mathbf{x} \qquad (21)$
$\mathbf{x} \leftarrow \exp_{\mathbf{x}}(-\eta \operatorname{grad} \mathcal{L}) \qquad (22)$
Additionally, points must be initialised on the hyperboloid. Previous work has either mapped a cube of Euclidean points in $\mathbb{R}^n$ to the hyperboloid by fixing the first coordinate (Nickel and Kiela, 2018) or initialised within a small ball around the origin of the Poincaré ball model and then projected onto the hyperboloid (Wilson and Leimeister, 2018). We find that optimisation convergence can be accelerated by randomly assigning points within the Poincaré ball (prior to projection onto the hyperboloid), but sampling the radius of each item as a function of its frequency of occurrence in the training data. Finally, we also apply gradient norm clipping to the tangent vectors.
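The three-step RSGD update of Eqs. (20)–(22) can be sketched compactly; this is an illustrative sketch that re-defines the minimal hyperboloid helpers so it is self-contained, and the learning rate is an arbitrary example value:

```python
import numpy as np

def mdot(x, y):
    """Minkowski inner product (Eq. 7)."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v):
    """Exponential map (Eq. 12)."""
    n = np.sqrt(max(mdot(v, v), 0.0))
    return x if n == 0.0 else np.cosh(n) * x + np.sinh(n) * v / n

def rsgd_step(x, euclidean_grad, lr=0.1):
    """One Riemannian SGD update on the hyperboloid (Eqs. 20-22)."""
    h = euclidean_grad.copy()
    h[0] = -h[0]                    # (20) apply the inverse Minkowski metric
    grad = h + mdot(x, h) * x       # (21) project onto the tangent space at x
    return exp_map(x, -lr * grad)   # (22) step along the geodesic

x = np.array([1.0, 0.0])            # origin of the 1D hyperboloid
g = np.array([0.0, 1.0])            # Euclidean gradient of some loss at x
x_new = rsgd_step(x, g)
print(-x_new[0] ** 2 + x_new[1] ** 2)  # the update stays on the manifold
```

Because the exponential map is exact, the updated point satisfies the constraint of Eq. (8) without any retraction or re-projection step.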
5.4. Item Aggregation
We are inspired by (Steck, 2015), where model complexity is reduced by eliminating the need to learn an embedding layer for users. Instead, vectors for users are computed as an intermediate representation by aggregating the vectors of the set of items they have interacted with. In Euclidean space, this can be done simply by taking a weighted mean, $\mathbf{x}_u = \sum_i w_i \mathbf{v}_i$, where $w_i$ are a set of weights and, in the simplest case, all weights are equal.
As hyperbolic space is not a vector space, an alternative procedure is required. A choice suitable for all Riemannian manifolds is the Fréchet mean (Fréchet, 1948; Arnaudon et al., 2013), which finds the center of mass, $\boldsymbol{\mu}$, of a cluster of points, $\{\mathbf{x}_i\}$, using the Riemannian distance $d$:

$\boldsymbol{\mu} = \operatorname{argmin}_{\boldsymbol{\mu}} \sum_i d^2(\boldsymbol{\mu}, \mathbf{x}_i) \qquad (23)$
The Fréchet mean is not directly calculable, but must be found through an optimisation procedure. Despite the existence of fast stochastic algorithms, it would have to be recalculated at every training step and would dominate the runtime.
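One standard iterative scheme for Eq. (23) alternates between averaging log-maps in the tangent space and stepping back to the manifold with the exponential map. This is an illustrative sketch (a Karcher-style fixed-point iteration on the hyperboloid, with our own numerical guards), shown to make the per-step cost of the optimisation concrete:

```python
import numpy as np

def mdot(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v):
    n = np.sqrt(max(mdot(v, v), 0.0))
    return x if n == 0.0 else np.cosh(n) * x + np.sinh(n) * v / n

def log_map(x, y):
    """Inverse of the exponential map: tangent vector at x pointing to y."""
    d = np.arccosh(np.clip(-mdot(x, y), 1.0, None))
    if d < 1e-7:
        return np.zeros_like(x)
    u = y + mdot(x, y) * x                 # project y onto the tangent space at x
    return d * u / np.sqrt(max(mdot(u, u), 1e-30))

def frechet_mean(points, iters=10):
    """Iterative Fréchet mean (Eq. 23): average log-maps, step with exp."""
    mu = points[0]
    for _ in range(iters):
        step = np.mean([log_map(mu, p) for p in points], axis=0)
        mu = exp_map(mu, step)
    return mu

origin = np.array([1.0, 0.0])
p1 = exp_map(origin, np.array([0.0, 0.6]))
p2 = exp_map(origin, np.array([0.0, -0.6]))
print(frechet_mean([p1, p2]))  # two symmetric points: mean at the origin
```

Every iteration touches every point in the user's history, which is why running this inside each training step is prohibitive at scale.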
To avoid this computational burden, we exploit the relationship between the hyperboloid model and the Minkowski spacetime of Einstein's Special Theory of Relativity. Given the Lorentz group of isometry-preserving group actions, we can aggregate the user's item history by directly calculating the relativistic center of mass (treating all items as having unit mass). This center of mass is analogous to the "Einstein midpoint" (Ungar, 2009), which is most efficiently calculated in the Klein model, following projection from the hyperboloid. The midpoint is given by

$\operatorname{mid}(\mathbf{v}_1, \ldots, \mathbf{v}_N) = \frac{\sum_{i=1}^{N} \gamma_i \mathbf{v}_i}{\sum_{i=1}^{N} \gamma_i} \qquad (24)$

where the $\gamma_i$ are the Lorentz factors

$\gamma_i = \frac{1}{\sqrt{1 - \lVert\mathbf{v}_i\rVert^2}} \qquad (25)$
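In Klein coordinates the midpoint of Eqs. (24)–(25) is a single weighted average, which is what makes it so much cheaper than the Fréchet mean. A minimal numpy sketch:

```python
import numpy as np

def einstein_midpoint(klein_points):
    """Einstein midpoint of a set of points in the Klein ball (Eqs. 24-25)."""
    V = np.asarray(klein_points)
    gammas = 1.0 / np.sqrt(1.0 - np.sum(V ** 2, axis=1))   # Lorentz factors (Eq. 25)
    return (gammas[:, None] * V).sum(axis=0) / gammas.sum()

# Points nearer the boundary carry larger Lorentz factors, so they pull the
# midpoint towards them more strongly than a plain Euclidean average would.
pts = [np.array([0.0, 0.0]), np.array([0.8, 0.0])]
print(einstein_midpoint(pts))
```

The cost is a single pass over the user's item history with no inner optimisation loop, so the implicit user representation can be recomputed on the fly at serving time.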
Figure 4 shows a comparison of the Einstein midpoint to the Fréchet mean over a scan of aggregation points. The Fréchet mean optimisation used gradient descent for ten iterations. Agreement better than 0.3% is observed for all the aggregation points tested, with the precision limited by the number of gradient descent steps performed. Due to the close agreement and superior runtime complexity, the Einstein midpoint is used for item aggregation.
6. Evaluation
We report results from experiments on simulations, eight Amazon review datasets, the MovieLens 20M dataset (Harper and Konstan, 2016), and finally a large ASOS proprietary dataset. Each experiment represents a milestone towards the development of full-scale hyperbolic recommender systems. Experiments report the standard recommender system metrics Hit Rate at 10 and Normalised Discounted Cumulative Gain at 10, which we denote as HR@10 and NDCG@10 respectively.
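For a leave-one-out protocol with a single held-out positive per user, the two metrics above reduce to simple functions of the positive item's rank. A minimal sketch of that standard formulation (our own helper names, not code from the evaluated systems):

```python
import numpy as np

def hit_rate_at_k(rank, k=10):
    """1 if the held-out positive item is ranked in the top k, else 0."""
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank, k=10):
    """Discounted gain of a single positive item at the given 1-indexed rank."""
    return 1.0 / np.log2(rank + 1) if rank <= k else 0.0

# In the experiments below, the positive item is ranked against 100 negative
# samples by model score, and HR@10 / NDCG@10 are averaged over users.
print(hit_rate_at_k(3), ndcg_at_k(3))
```

HR@10 only asks whether the positive appears in the top ten, while NDCG@10 additionally rewards placing it near the top of that list.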
6.1. Simulations
To demonstrate the viability of hyperbolic recommender systems, we present three small-scale simulations. The simulations are generated using a symmetric, hyperboloid recommender with explicit user representations and the BPR loss. The embeddings are then projected onto the Poincaré disk to produce the figures.
The first simulation (Figure 5, left column) consists of four users and four items clustered into two disjoint bipartite graphs. An effective recommender system embeds users close to items they have purchased and distant from items they have not purchased. Therefore, we would expect the final embeddings to consist of two distinct groups, with users A and B and items 1 and 2 all embedded very close to one another, and users C and D and items 3 and 4 also embedded close to one another, but a large distance away from the first group. As can be seen, this is exactly what is learned by the symmetric hyperboloid recommender system.
In the second simulation (Figure 5, middle column), a third disjoint useritem graph is added to the system. Again, users and items within each group share very similar embeddings, with high intergroup separations.
In the third simulation (Figure 5, right column), an additional item, labelled 7, that has been purchased by all six users is added. Consequently, an effective recommender system will learn a set of embeddings such that item 7 is close to all six users, while still maintaining a distance between users in each group and items that were bought exclusively by members of one of the other groups. As can be seen, the resulting set of embeddings learned by the symmetric hyperboloid recommender system is very similar to those produced in the second simulation; however, item 7 is embedded near the origin. This is consistent with previous work embedding tree-like graphs (Nickel and Kiela, 2017; Chamberlain et al., 2017), as item 7 is effectively higher up the product hierarchy than items 1-6. This simple system highlights the strengths of using hyperbolic geometry for recommendations. All users are equally close to item 7; however, due to the geodesic structure of the Poincaré ball, they are still a large distance away from the users and items in the other groups.
6.2. Amazon Review Datasets
Having demonstrated the viability of hyperbolic recommender systems for small simulations, we apply the same symmetric, hyperboloid BPR based model to the Amazon Review datasets and show that it outperforms the equivalent Euclidean model. We choose the Amazon datasets as our analysis of the underlying networks, presented in Table 1 and Figure 6, shows that these datasets are examples of complex networks.
Euclidean and hyperbolic methods are evaluated by training on all interactions from users with more than 20 interactions. The final performance is assessed on a held out test set composed of the most recent interaction each user has had with an item using HR@10 with 100 negative samples.
To ensure our benchmark emphasises the difference in the underlying geometry in the task of user-item recommendation, hyperparameter tuning for both Euclidean and hyperbolic models follows an identical procedure. The dimensionality of the embedding is fixed at 50 and we search over grids of candidate learning rates and regularization parameters for each geometry. In all experiments we fix the batch size to 128 training samples and use stochastic gradient descent.

Data set | Hyperboloid | Euclidean

Automotive  
Cell Phones and Accessories  
Clothing Shoes and Jewelry  
Musical Instruments  
Patio Lawn and Garden  
Sports and Outdoors  
Tools and Home Improvement  
Toys and Games 
For each system and each dataset, the optimal learning rate and regularization value is established by repeating each experiment over several runs and assessing the average HR@10. Results are presented in Table 2. In all cases, we observe superior performance using hyperbolic geometry.
6.3. MovieLens20M Dataset
Given that hyperbolic recommendation systems outperform their Euclidean equivalents on datasets that have the structure of complex networks, the next milestone is to show that hyperbolic recommender systems can scale. To achieve scalability we adopt the asymmetric recommender paradigm, where customers are not represented explicitly, but as aggregates of product representations.
We assess the performance of our asymmetric hyperboloid recommender system using the MovieLens 20M dataset (Harper and Konstan, 2016), which contains integer movie ratings. To convert it into a form consistent with co-purchasing data, we filter so that only user-item pairs in which the user has given the movie a rating of 4 or 5 are considered to be positive interactions (see Table 1 for the resulting dataset statistics). As with the Amazon datasets, we hold out each user's most recent interaction to form a test set, and use each user's second most recent interaction as a validation set. We evaluate our results using HR@10 and NDCG@10 with 100 negative examples.
We compare the asymmetric hyperboloid recommender system with the symmetric case using an embedding dimension of 50, a learning rate of 0.01 and a batch size of 1024 with stochastic gradient descent. The performance of the asymmetric system is roughly equivalent to that of the symmetric system, but the asymmetric system is able to learn in half the time using five times fewer parameters (Figure 8). Fast convergence is important in production recommender systems, where large datasets containing millions of users are retrained daily.
6.4. Proprietary Dataset
Finally, we assess the performance of the hyperboloid recommender system on an ASOS proprietary dataset, which consists of 28m interactions between users and items over a period of one year (the exact numbers of users and items cannot be disclosed for commercial reasons). Embeddings for a sample of this dataset in 2D hyperbolic space, projected into the Poincaré disc, are shown in Figure 7. In the figure, points are coloured by product type and scaled by item popularity, with black stars showing the implicit customer representations.
In these experiments, the test set consisted of the last week of interactions, with the training and validation sets formed from the previous 51 weeks of data. The validation set consisted of interactions drawn uniformly at random in time, with the remainder forming the training set.
In all configurations, the runtime of the symmetric system was four times greater than that of the asymmetric system for a fixed number of epochs. With 50 embedding dimensions, a learning rate of 0.05, a batch size of 512 and training for a single epoch, we observed a test set HR@10 = 0.589 and NDCG@10 = 0.324, significantly better than random and demonstrating that the system can learn on large commercial datasets. However, this performance was worse than that of the equivalent Euclidean asymmetric recommender, which gave HR@10 = 0.639 and NDCG@10 = 0.393 when trained with the same hyperparameters.
Although the performance of the hyperboloid recommender did not surpass that of the Euclideanbased system, we believe these results are extremely promising. The performance of the hyperboloid recommender system could be significantly improved by applying adaptive learning rates, particularly through development of adaptive optimisation techniques that function on the hyperboloid. Improvements to the initialisation scheme used should also be investigated.
7. Conclusion
We have presented a novel hyperbolic recommendation system based on the hyperboloid model of hyperbolic geometry. Our approach was inspired by the intimate connections between hyperbolic geometry, complex networks and recommendation systems. We have shown that it consistently and significantly outperforms the equivalent Euclidean model on popular public benchmarks. We have also shown that, by using Einstein midpoints, it is possible to develop asymmetric hyperbolic recommender systems that scale to millions of users, achieving the same performance as symmetric systems but with far fewer parameters and greatly reduced training times. We believe that future work to develop adaptive optimisers in hyperbolic space will lead to state-of-the-art production-grade hyperbolic recommender systems.
References
 Adcock et al. (2013) Aaron B Adcock, Blair D Sullivan, and Michael W Mahoney. 2013. Tree-like structure in large social and information networks. In ICDM.
 Arnaudon et al. (2013) Marc Arnaudon, Frédéric Barbaresco, and Le Yang. 2013. Medians and Means in Riemannian Geometry: Existence, Uniqueness and Computation. Springer Berlin Heidelberg, Berlin, Heidelberg, 169–197.
 Barabási and Albert (1999) Albert-László Barabási and Réka Albert. 1999. Emergence of scaling in random networks. Science 286, 5439 (1999), 509–512.
 Becigneul and Ganea (2019) Gary Becigneul and Octavian-Eugen Ganea. 2019. Riemannian Adaptive Optimisation Methods. In ICLR.
 Bläsius et al. (2016) Thomas Bläsius, Tobias Friedrich, and Anton Krohmer. 2016. Efficient Embedding of Scale-Free Graphs in the Hyperbolic Plane. European Symposium on Algorithms 16 (2016), 1–16.
 Boguna et al. (2010) Marian Boguna, Fragkiskos Papadopoulos, and Dmitri Krioukov. 2010. Sustaining the Internet with Hyperbolic Mapping. Nature Communications 1, 62 (2010), 62.
 Bonnabel (2013) Silvere Bonnabel. 2013. Stochastic gradient descent on riemannian manifolds. IEEE Trans. Automat. Control 58, 9 (2013), 2217–2229.
 Cannon et al. (1997) James W. Cannon, William J. Floyd, Richard Kenyon, and Walter R. Parry. 1997. Hyperbolic Geometry. 31 (1997), 59–115.
 Cano et al. (2006) Pedro Cano, Oscar Celma, Markus Koppenberger, and Javier M Buldu. 2006. Topology of Music Recommendation Networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 16, 1 (2006), 013107.
 Cardoso et al. (2018) Ângelo Cardoso, Fabio Daolio, and Saúl Vargas. 2018. Product Characterisation towards Personalisation: Learning Attributes from Unstructured Data to Recommend Fashion Products. KDD (2018).
 Chamberlain et al. (2017) Benjamin Paul Chamberlain, James Clough, and Marc Peter Deisenroth. 2017. Neural Embeddings of Graphs in Hyperbolic Space. MLG (2017).
 Chollet et al. (2015) François Chollet et al. 2015. Keras. https://github.com/fchollet/keras.
 Clough and Evans (2016) James R. Clough and Tim S. Evans. 2016. What is the Dimension of Citation Space? Physica A: Statistical Mechanics and its Applications 448 (2016), 235–247.
 Clough et al. (2015) James R. Clough, Jamie Gollings, Tamar V. Loach, and Tim S. Evans. 2015. Transitive reduction of citation networks. Complex Networks 3, 2 (2015), 189–203.
 Cvetkovski and Crovella (2009) Andrej Cvetkovski and Mark Crovella. 2009. Hyperbolic Embedding and Routing for Dynamic Graphs. In Proceedings of IEEE INFOCOM. 1647–1655.
 De Sa et al. (2018) Christopher De Sa, Albert Gu, Christopher Ré, and Frederic Sala. 2018. Representation Tradeoffs for Hyperbolic Embeddings. In ICML. 4457–4466.
 Dhingra et al. (2018) Bhuwan Dhingra, Christopher J. Shallue, Mohammad Norouzi, Andrew M. Dai, and George E. Dahl. 2018. Embedding Text in Hyperbolic Spaces. (2018), 59–69.
 Erdos and Renyi (1960) Paul Erdos and Alfred Renyi. 1960. On the Evolution of Random Graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences 5, 1 (1960), 17–60.

 Abadi et al. (2015) Martín Abadi et al. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems.
 Fréchet (1948) Maurice René Fréchet. 1948. Les éléments aléatoires de nature quelconque dans un espace distancié. Annales de l’institut Henri Poincaré 10, 4 (1948), 215–310.
 Ganea et al. (2018a) Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. 2018a. Hyperbolic Entailment Cones for Learning Hierarchical Embeddings. In ICML.
 Ganea et al. (2018b) Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann. 2018b. Hyperbolic Neural Networks. In NeurIPS.
 Gomez-Uribe and Hunt (2015) Carlos A. Gomez-Uribe and Neil Hunt. 2015. The Netflix Recommender System. ACM Transactions on Management Information Systems 6, 4 (2015), 1–19.
 Gromov (2007) Mikhail Gromov. 2007. Metric Structures for Riemannian and Non-Riemannian Spaces.
 Gu et al. (2019) Albert Gu, Frederic Sala, Beliz Gunel, and Christopher Ré. 2019. Learning MixedCurvature Representations in Product Spaces. In ICLR.
 Guillaume and Latapy (2006) Jean-Loup Guillaume and Matthieu Latapy. 2006. Bipartite graphs as models of complex networks. Physica A: Statistical Mechanics and its Applications 371, 2 (2006), 795–813.
 Gulcehre et al. (2019) Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, and Nando de Freitas. 2019. Hyperbolic Attention Networks. In ICLR.
 Guo and Liu (2010) Qiang Guo and Jian-Guo Liu. 2010. Clustering Effect of User-Object Bipartite Network on Personalized Recommendation. International Journal of Modern Physics C 21, 07 (2010), 891–901.
 Harper and Konstan (2016) F Maxwell Harper and Joseph A Konstan. 2016. The MovieLens Datasets: History and Context. TIIS 5, 4 (2016), 19.
 Hu et al. (2008) Yifan Hu, Yehuda Koren, Chris Volinsky, Florham Park, Yehuda Koren, and Chris Volinsky. 2008. Collaborative Filtering for Implicit Feedback Datasets. In ICDM. 263–272.
 Kitsak et al. (2017) Maksim Kitsak, Fragkiskos Papadopoulos, and Dmitri Krioukov. 2017. Latent geometry of bipartite networks. Physical Review E 95, 3 (2017), 032309.
 Kleinberg (2007) Robert Kleinberg. 2007. Geographic Routing Using Hyperbolic Space. Proc. IEEE INFOCOM 2007 (2007), 1902–1909.
 Krioukov et al. (2010) Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, and Amin Vahdat. 2010. Hyperbolic Geometry of Complex Networks. Physical Review E 82, 3 (2010), 036106.
 Leimeister and Wilson (2018) Matthias Leimeister and Benjamin J Wilson. 2018. Skip-Gram Word Embeddings in Hyperbolic Space. arXiv preprint arXiv:1809.01498 (2018).
 Liu and Natarajan (2017) Kuan Liu and Prem Natarajan. 2017. WMRB: Learning to Rank in a Scalable Batch Training Approach. arXiv preprint arXiv:1711.04015 (2017).
 Menon and Elkan (2011) Aditya Krishna Menon and Charles Elkan. 2011. Link Prediction via Matrix Factorization. In ECML-PKDD. Springer, 437–452.
 Newman (2003) Mark EJ Newman. 2003. The Structure and Function of Complex Networks. SIAM Review 45, 2 (2003), 167–256.
 Nickel and Kiela (2017) Maximilian Nickel and Douwe Kiela. 2017. Poincaré Embeddings for Learning Hierarchical Representations. In NIPS. 6338–6347.
 Nickel and Kiela (2018) Maximilian Nickel and Douwe Kiela. 2018. Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry. In ICML.

 Rendle et al. (2009) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence. 452–461.
 Sarwar et al. (2000) Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2000. Application of Dimensionality Reduction in Recommender Systems: A Case Study. Technical Report. Minnesota Univ Minneapolis Dept of Computer Science.
 Shavitt and Tankel (2008) Yuval Shavitt and Tomer Tankel. 2008. Hyperbolic Embedding of Internet Graph for Distance Estimation and Overlay Construction. IEEE/ACM Transactions on Networking 16, 1 (2008), 25–36.
 Steck (2015) Harald Steck. 2015. Gaussian Ranking by Matrix Factorization. In RecSys. ACM Press, 115–122.
 Tifrea et al. (2019) Alexandru Tifrea, Gary Bécigneul, and Octavian-Eugen Ganea. 2019. Poincaré GloVe: Hyperbolic Word Embeddings. In ICLR.
 Ungar (2009) Abraham Ungar. 2009. A Gyrovector Space Approach to Hyperbolic Geometry. Morgan & Claypool Publishers, San Rafael.
 Vinh et al. (2019) Tran Dang Quang Vinh, Yi Tay, Shuai Zhang, Gao Cong, and Xiao-Li Li. 2019. Hyperbolic Recommender Systems. arXiv preprint arXiv:1809.01703 (2019).
 Wilson and Leimeister (2018) Benjamin Wilson and Matthias Leimeister. 2018. Gradient Descent in Hyperbolic Space. arXiv preprint arXiv:1805.08207 (2018).
 Zanin et al. (2009) Massimiliano Zanin, Pedro Cano, Oscar Celma, and Javier M Buldu. 2009. Preferential Attachment, Aging and Weights in Recommendation Systems. International Journal of Bifurcation and Chaos 19, 02 (2009), 755–763.
 Zhou et al. (2007) Tao Zhou, Jie Ren, Matúš Medo, and YiCheng Zhang. 2007. Bipartite Network Projection and Personal Recommendation. Physical Review E 76, 4 (2007), 046115.
8. Reproducibility Guidance
This section contains detailed instructions to aid in the reproduction of our experimental results. All code and data used in the experiments are available on request.
8.1. Evaluation Metrics
We evaluate our results using Hit Rate at 10 (HR@10) and Normalised Discounted Cumulative Gain at 10 (NDCG@10). To calculate the hit rate, each positive example in the held-out set is ranked alongside 100 negative examples sampled uniformly from the items the user has not interacted with; the proportion of cases in which the positive example is ranked among the 10 items closest to the user ("hits") gives the performance of the system. NDCG@10 sums the relevance of the first 10 items, discounted by the log of their position and normalised by the NDCG@10 of the ideal recommender.
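The two metrics above can be sketched as follows for a single held-out positive. This is a minimal illustration with hypothetical function names (`evaluate` is not part of our codebase); smaller distances are assumed to mean better ranks, and with a single relevant item the ideal DCG@10 is 1, so NDCG reduces to the positive's discounted gain.

```python
import math

def hit_rate_at_k(rank, k=10):
    """1 if the held-out positive is ranked in the top k, else 0 (rank is 0-based)."""
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank, k=10):
    """Binary-relevance NDCG@k for one held-out positive: the ideal DCG is
    1/log2(2) = 1, so the normalised score is just the discounted gain."""
    return 1.0 / math.log2(rank + 2) if rank < k else 0.0

def evaluate(distance_pos, distances_neg, k=10):
    """distance_pos: distance of the positive item to the user;
    distances_neg: distances of the 100 sampled negatives.
    The positive's rank is the number of negatives that are closer."""
    rank = sum(d < distance_pos for d in distances_neg)
    return hit_rate_at_k(rank, k), ndcg_at_k(rank, k)
```

The reported scores are these per-example values averaged over all held-out positives.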
8.2. Simulated Experiments
The simulations use a 2-dimensional hyperboloid, the BPR loss, a learning rate of 1, a decay rate of 0.02 and an initialisation width of 0.01.
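As an illustration of combining the BPR loss with the hyperboloid distance, the following sketch treats the negative distance as the ranking score, so the positive item should sit closer to the user than the negative one. Function names are hypothetical and this is not the simulation code itself.

```python
import math

def hyperboloid_distance(x, y):
    """d(x, y) = arccosh(-<x, y>_L) using the Minkowski inner product."""
    inner = -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))
    return math.acosh(max(-inner, 1.0))  # clamp to 1 for numerical safety

def bpr_loss(user, pos_item, neg_item):
    """BPR term -log sigmoid(score_pos - score_neg) with score = -distance."""
    margin = hyperboloid_distance(user, neg_item) - hyperboloid_distance(user, pos_item)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

A correctly ordered triple (positive closer than negative) yields a loss below log 2, the value at zero margin.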
8.3. Amazon Review Datasets Experiments
All experiments were conducted using Python 3 and Torch 1.0 on Ubuntu 16.04, with a 2×K80 Tesla GPU and 16 GB of RAM.
Optimal parameters for the hyperboloid model were found to be for the learning rate and for the regularisation parameter. The optimum varied in the case of the Euclidean model, with respectively on automotive, on cellphones, on patio, on clothing, on musical, on toys, on tools, and on sport. The minibatch size was 128, and the models were trained for 10 epochs. Only plain updates were considered, with the learning rate held constant across epochs. The test set is composed of each user's last positive interaction (with a rating score ). Positive interactions with items not seen during training were removed from the test set, so that measured performance only reflects items whose embeddings were fully optimised.
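The test-set construction described above can be sketched as follows, assuming positive interactions are available as (user, item, timestamp) tuples; the function name is hypothetical and rating filtering is applied beforehand.

```python
from collections import defaultdict

def leave_last_out_split(interactions):
    """Hold out each user's chronologically last positive interaction for
    testing; drop held-out items that never appear in training, so the
    evaluation only covers items whose embeddings were optimised."""
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))

    train, test = [], []
    for user, events in by_user.items():
        events.sort()                          # chronological order
        *head, (_, last_item) = events         # last event is held out
        train.extend((user, item) for _, item in head)
        test.append((user, last_item))

    trained_items = {item for _, item in train}
    test = [(u, i) for u, i in test if i in trained_items]
    return train, test
```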
8.4. MovieLens-20M Dataset Experiments
The analyses of the asymmetric and symmetric models on the MovieLens-20M dataset were conducted with the following parameters: gradients were clipped in the tangent space to norm 1; the learning rate was 0.1 using SGD; the embedding dimension was 50; and the loss was WMRB with 100 negative samples and a regularisation of 0.01. Embeddings were initialised uniformly at random in a hypercube of width 0.001 and then projected onto the hyperboloid where appropriate.
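The initialisation and tangent-space gradient clipping described above can be sketched as follows (hypothetical helper names, not the production code). Points are lifted onto the hyperboloid via x0 = sqrt(1 + ||x||^2), and an ambient gradient is projected onto the tangent space at x via proj_x(u) = u + <x, u>_L x before its norm is clipped.

```python
import math
import random

def minkowski_inner(x, y):
    """Minkowski inner product <x, y>_L = -x0*y0 + sum_k xk*yk."""
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def init_on_hyperboloid(n, dim, width=0.001):
    """Sample spatial coordinates uniformly in a hypercube of the given
    width, then lift onto the hyperboloid with x0 = sqrt(1 + ||x||^2)."""
    points = []
    for _ in range(n):
        spatial = [random.uniform(-width / 2, width / 2) for _ in range(dim)]
        x0 = math.sqrt(1.0 + sum(c * c for c in spatial))
        points.append([x0] + spatial)
    return points

def clip_tangent_gradient(x, grad, max_norm=1.0):
    """Project an ambient gradient onto the tangent space at x, then clip
    its (positive-definite on the tangent space) Minkowski norm."""
    coef = minkowski_inner(x, grad)
    tangent = [g + coef * xi for g, xi in zip(grad, x)]
    norm = math.sqrt(max(minkowski_inner(tangent, tangent), 0.0))
    if norm > max_norm:
        tangent = [t * max_norm / norm for t in tangent]
    return tangent
```

Since <x, x>_L = -1 on the hyperboloid, the projected gradient satisfies <x, proj_x(u)>_L = 0, i.e. it is genuinely tangent.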
8.5. Derivatives of the Loss Function
Here we cover the case of the WMRB loss with the hyperboloid distance. The gradients using the inner product are the same with the distance derivative removed, and the derivation for the BPR loss function is largely similar.
The WMRB ranking loss is given by
(26) $\mathcal{L} = \sum_{(u,i)} \log(1 + r_{ui}), \qquad r_{ui} = \sum_{j \in N} \big| m + d(\theta_u, \theta_i) - d(\theta_u, \theta_j) \big|_+$
where $d(\theta_u, \theta_i)$ is the distance between the user representation $\theta_u$ and the item embedding $\theta_i$, $|x|_+ = \max(0, x)$ is the ReLU function and $m$ is a slack parameter such that terms only contribute to the loss if $d(\theta_u, \theta_j) < d(\theta_u, \theta_i) + m$. The hyperboloid distance and Minkowski inner product are given by
(27) $d(x, y) = \operatorname{arccosh}(-\langle x, y \rangle_{\mathcal{L}})$
(28) $\langle x, y \rangle_{\mathcal{L}} = -x_0 y_0 + \sum_{k=1}^{n} x_k y_k$
We denote by $\mathbb{1}_{uij}$ the condition $d(\theta_u, \theta_j) < d(\theta_u, \theta_i) + m$ that must be satisfied for updates to occur and write $S_{ui} = 1 + r_{ui}$ for the argument of the logarithm, then
(30) $\frac{\partial \mathcal{L}}{\partial \theta_j} = -\frac{\mathbb{1}_{uij}}{S_{ui}} \frac{\partial d(\theta_u, \theta_j)}{\partial \theta_j} = \frac{\mathbb{1}_{uij}}{S_{ui}} \frac{g\,\theta_u}{\sqrt{\langle \theta_u, \theta_j \rangle_{\mathcal{L}}^2 - 1}}$
where $g = \operatorname{diag}(-1, 1, \dots, 1)$. Updates for the positive items $\theta_i$ are similarly
(31) $\frac{\partial \mathcal{L}}{\partial \theta_i} = \frac{\sum_{j \in N} \mathbb{1}_{uij}}{S_{ui}} \frac{\partial d(\theta_u, \theta_i)}{\partial \theta_i} = -\frac{\sum_{j \in N} \mathbb{1}_{uij}}{S_{ui}} \frac{g\,\theta_u}{\sqrt{\langle \theta_u, \theta_i \rangle_{\mathcal{L}}^2 - 1}}$
However, updates of the user representation $\theta_u$ are more complex
(32) $\frac{\partial \mathcal{L}}{\partial \theta_u} = \frac{1}{S_{ui}} \sum_{j \in N} \mathbb{1}_{uij} \left( \frac{\partial d(\theta_u, \theta_i)}{\partial \theta_u} - \frac{\partial d(\theta_u, \theta_j)}{\partial \theta_u} \right)$
where the derivative propagates through the user representation to its component item embeddings $\theta_k$ as follows:
(33) $\frac{\partial \mathcal{L}}{\partial \theta_k} = \left( \frac{\partial \theta_u}{\partial \theta_k} \right)^{\top} \frac{\partial \mathcal{L}}{\partial \theta_u}$
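In the asymmetric model the user representation is an Einstein midpoint of item embeddings, which is the aggregation the user-side derivative propagates through. The following is a minimal sketch of that aggregation in the Klein ball, with hypothetical helper names; in practice hyperboloid points are first mapped to Klein coordinates (v = x_{1:}/x_0), aggregated, and mapped back.

```python
import math

def lorentz_factor(v):
    """gamma(v) = 1 / sqrt(1 - ||v||^2) for a point v in the Klein ball."""
    return 1.0 / math.sqrt(1.0 - sum(c * c for c in v))

def einstein_midpoint(points):
    """Einstein midpoint of Klein-ball points: the mean weighted by the
    Lorentz factors, so points nearer the boundary carry more weight."""
    gammas = [lorentz_factor(v) for v in points]
    total = sum(gammas)
    dim = len(points[0])
    return [sum(g * v[k] for g, v in zip(gammas, points)) / total
            for k in range(dim)]
```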