PinnerSage: Multi-Modal User Embedding Framework for Recommendations at Pinterest

07/07/2020 · Aditya Pal, et al. · Pinterest, Inc.

Latent user representations are widely adopted in the tech industry for powering personalized recommender systems. Most prior work infers a single high-dimensional embedding to represent a user, which is a good starting point but falls short in delivering a full understanding of the user's interests. In this work, we introduce PinnerSage, an end-to-end recommender system that represents each user via multi-modal embeddings and leverages this rich representation of users to provide high quality personalized recommendations. PinnerSage achieves this by clustering users' actions into conceptually coherent clusters with the help of a hierarchical clustering method (Ward) and summarizing the clusters via representative pins (medoids) for efficiency and interpretability. PinnerSage is deployed in production at Pinterest and we outline the several design decisions that make it run seamlessly at a very large scale. We conduct several offline and online A/B experiments to show that our method significantly outperforms single embedding methods.


1. Introduction

Pinterest is a content discovery platform that allows 350M+ monthly active users to collect and interact with 2B+ visual bookmarks called pins. Each pin is an image item associated with contextual text, representing an idea that users can find and bookmark from around the world. Users can save pins on boards to keep them organized and easy to find. With billions of pins on Pinterest, it becomes crucial to help users find those ideas (pins) which would spark inspiration. Personalized recommendations thus form an essential component of the Pinterest user experience and are pervasive in our products. The Pinterest recommender system spans a variety of algorithms that collectively define the experience on the homepage. Different algorithms are optimized for different objectives and include: (a) homefeed recommendations, where a user can view an infinite recommendation feed on the homepage (as shown in Figure 1), (b) shopping product recommendations, which link to 3rd party e-commerce sites, (c) personalized search results, (d) personalized ads, (e) personalized pin board recommendations, etc. Therefore, it becomes necessary to develop a universal, shareable, and rich understanding of user interests to power large-scale cross-functional use cases at Pinterest.

Figure 1. Pinterest Homepage.

Latent user representation methods have become increasingly important in advancing our understanding of users. They are shown to be effective at powering collaborative filtering techniques (Sarwar et al., 2001; Linden et al., 2003), and serving as features in ranking models (Cheng et al., 2016; Wu et al., 2018; Covington et al., 2016; You et al., 2019). Due to their efficacy, user embeddings are widely adopted in various industry settings. They are used to power YouTube and Google play recommendations (Covington et al., 2016; Cheng et al., 2016), personalize search ranking and similar listing recommendations at Airbnb (Grbovic and Cheng, 2018), recommend news articles to users (Okura et al., 2017), connect similar users at Etsy (Zhao et al., 2018), etc. Building an effective user embedding system that provides personalized recommendations to hundreds of millions of users from a candidate pool of billions of items has several inherent challenges. The foremost challenge is how to effectively encode multiple facets of a user?

A user can have a diverse set of interests with no obvious linkage between them. These interests can evolve, with some persisting long term while others span a short time period. Most prior work aims to capture the rich diversity of a user's actions and interests via a single high-dimensional embedding vector. Typically, items to be recommended are also represented in the same embedding space. This is initially satisfying, but as pointed out by the research work described in (Baltrunas and Amatriain, 2009; Weston et al., 2013; Liu et al., 2019; Epasto and Perozzi, 2019), a good embedding must encode a user's multiple tastes, interests, styles, etc., whereas an item (a video, an image, a news article, a house listing, a pin, etc.) typically has only a single focus. Hence, an attention layer (Zhang et al., 2018) or another context-adapting approach is needed to keep track of the evolving interests of the users. One alternative that has shown promise is to represent a user with multiple embeddings, with each embedding capturing a specific aspect of the user. As shown by (Weston et al., 2013), multi-embedding user representations can deliver a 25% improvement in YouTube video recommendations. (Liu et al., 2019) also shows reasonable gains on small benchmark datasets. However, multi-embedding models are not widely adopted in the industry due to several important questions and concerns that are not yet fully addressed by prior work:

  • How many embeddings need to be considered per user?

  • How would one run inference at scale for hundreds of millions of users and update the embeddings?

  • How to select the embeddings to generate personalized recommendations?

  • Will the multi-embedding models provide any significant gains in a production setting?

Most of the prior multi-embedding work side-steps these challenges either by running only small-scale experiments and not deploying these techniques in production, or by limiting a user to very few embeddings, thereby restricting the utility of such approaches.

Present Work. In this paper, we present an end-to-end system, called PinnerSage, that is deployed in production at Pinterest. PinnerSage is a highly scalable, flexible, and extensible recommender system that internally represents each user with multiple PinSage (Ying et al., 2018) embeddings. It infers multiple embeddings via hierarchical clustering of users' actions into conceptual clusters and uses an efficient representation of the clusters via medoids. Then, it employs a highly efficient nearest neighbor system to power candidate generation for recommendations at scale. Finally, we evaluate PinnerSage extensively via offline and online experiments. We conduct several large scale A/B tests to show that PinnerSage based recommendations result in significant engagement gains for Pinterest's homefeed and shopping product recommendations.

2. PinnerSage Design Choices

We begin by discussing important design choices of PinnerSage.

Design Choice 1: Pin Embeddings are Fixed. Most prior work jointly learns user and item embeddings (e.g., (Covington et al., 2016; Grbovic and Cheng, 2018)). This causes inherent problems in large-scale applications: it unnecessarily complicates the model, slows inference computation, and makes real-time updates difficult. Beyond these, we argue that it can often lead to less desirable side-effects. Consider the toy example in Figure 2, where a user is interested in painting, shoes, and sci-fi. Jointly learnt user and pin embeddings would bring pin embeddings on these disparate topics closer, which is actually what we wish to avoid. Pin embeddings should only operate on the underlying principle of bringing similar pins closer while keeping the rest of the pins as far as possible. For this reason, we use PinSage (Ying et al., 2018), which precisely achieves this objective without any dilution. PinSage is a unified pin embedding model that integrates visual signals, text annotations, and pin-board graph information to generate high quality pin embeddings. An additional advantage of this design choice is that it considerably simplifies our downstream systems and inference pipelines.

Figure 2. Pins of 256-dimensional embeddings visualized in 2D. These pins depict three different interests of a user.

Design Choice 2: No Restriction on Number of Embeddings. Prior work either fixes the number of embeddings to a small number (Weston et al., 2013) or puts an upper bound on them (Liu et al., 2019). Such restrictions at best hinder developing a full understanding of the users, and at worst merge different concepts together, leading to bad recommendations. For example, merging item embeddings, which is considered reasonable (see (Covington et al., 2016; You et al., 2019)), could yield an embedding that lies in a very different region. Figure 2 shows that a merger of three disparate pin embeddings results in an embedding that is best represented by the concept energy boosting breakfast. Needless to say, recommendations based on such a merger can be problematic. Our work allows a user to be represented by as many embeddings as their underlying data supports. This is achieved by clustering users' actions into conceptually coherent clusters via a hierarchical agglomerative clustering algorithm (Ward). A light user might get represented by 3-5 clusters, whereas a heavy user might get represented by 75-100 clusters.

Design Choice 3: Medoid-based Representation of Clusters. Typically, clusters are represented by the centroid, which requires storing a full embedding; the centroid can also be sensitive to outliers in the cluster. To compactly represent a cluster, we instead pick a cluster member pin, called the medoid. The medoid is by definition a member of the user's originally interacted pin set, and hence avoids the pitfall of topic drift and is robust to outliers. From a systems perspective, the medoid is a concise way of representing a cluster, as it only requires storing the medoid's pin id, and it also enables cross-user and even cross-application cache sharing.

Design Choice 4: Medoid Sampling for Candidate Retrieval. PinnerSage provides a rich representation of a user via cluster medoids. However, in practice we cannot use all the medoids simultaneously for candidate retrieval due to cost concerns; moreover, the user would be bombarded with too many different items. We therefore sample 3 medoids proportional to their importance scores (computation described in later sections) and recommend their nearest neighboring pins. The importance scores of medoids are updated daily, so they can adapt with the changing tastes of the user.

Design Choice 5: Two-pronged Approach for Handling Real-Time Updates. It is important for a recommender system to adapt to the current needs of its users. At the same time, an accurate representation of users requires looking at their past 60-90 day activities. The sheer size of the data and the speed at which it grows make it hard to consider both aspects together. Similar to (Grbovic and Cheng, 2018), we address this issue by combining two methods: (a) a daily batch inference job that infers multiple medoids per user based on their long-term interaction history, and (b) an online version of the same model that infers medoids based on the users' interactions on the current day. As new activity comes in, only the online version is updated. At the end of the day, the batch version consumes the current day's activities and resolves any inconsistencies. This approach ensures that our system adapts quickly to users' current needs while not compromising on their long-term interests.

Design Choice 6: Approximate Nearest Neighbor System. To generate embedding based recommendations, we employ an approximate nearest neighbor (ANN) system. Given a query (medoid), the ANN system fetches the pins closest to the query in the embedding space. We show how several improvements to the ANN system, such as filtering low quality pins, careful selection of indexing techniques, and caching of medoids, result in the final production version having 1/10 the cost of the original prototype.

3. Our Approach

Notations. Let P represent the set of all pins at Pinterest. The cardinality of P is in the order of billions. Each pin p in P has a d-dimensional PinSage embedding. Let A_u = (a_1, a_2, ..., a_M) be the sequence of action pins of user u, such that the user either repinned or clicked pin a_i at time t_i. For the sake of simplicity, we drop the subscript u as we formulate for a single user, unless stated otherwise. We consider the action pins in A to be sorted by action time, so that a_1 is the pin id of the first action of the user. Main Assumption: Pin Embeddings are Fixed. As mentioned in Design Choice 1 (Section 2), we consider pin embeddings to be fixed and generated by a black-box model. Within Pinterest, this model is PinSage (Ying et al., 2018), which is trained to place similar pins nearby in the embedding space with the objective of subsequent retrieval. This assumption is ideal in our setting, as it considerably reduces the complexity of our models. We made a similar assumption in our prior work (You et al., 2019). Main Goal. Our main goal is to infer multiple embeddings e_1, ..., e_E for each user, where each e_i lies in the same d-dimensional space as the pin embeddings, given the user's actions A and the pin embeddings. Since pin embeddings are fixed and hence not jointly inferred, we seek to learn each e_i to be compatible with the pin embeddings, specifically for the purpose of retrieving pins similar to e_i. The number of embeddings E can differ across users. However, for our approach to be practically feasible, we require E to be in the order of tens to hundreds (E << M).
To show the promise of a clustering-based approach, we consider the task of predicting the next user action. We have access to the user's past actions, and our goal is to predict the next pin that the user is going to interact with, out of a corpus of billions of pins. To simplify this challenge, we measure performance by asking how often the cosine similarity between the user embedding and the embedding of the next action pin exceeds a fixed threshold. We compare the following four approaches:

  1. Last pin: User representation is the embedding of the last action pin (a_M).

  2. Decay average: User representation is a time-decayed average of the embeddings of their action pins. Specifically, the decay average embedding is proportional to sum_i e^{-λ(T - t_i)} · a_i, where T is the current time and λ > 0 is a decay rate.

  3. Oracle: Oracle can "look into the future" and pick as the user representation the past action pin of the user that is closest to the next action pin. This measures the upper bound on the accuracy of a method that would predict future engagements based on past engagements.

  4. Kmeans Oracle: User is represented via k-means clustering over their past action pins. Again, the Oracle gets to see the next action pin and picks as the user representation the cluster centroid closest to it.

Models Accuracy Lift
Last pin 0%
Decay average 25%
Kmeans Oracle 98%
Oracle (practical upper-bound) 140%
Table 1. Accuracy lift of models on predicting the next user action. The lifts are computed w.r.t. last pin model.
Figure 3. Snapshot of action pins (repins or clicks) of a random user. The cosine score for a pin is the cosine similarity between its embedding and that of the latest pin. The age of a pin is the number of days elapsed from the action date to the data collection date.

Table 1 shows the results. The Oracle model provides substantial accuracy gains, deservedly so, as it can look at the future pin. However, its superior performance is only possible because it is able to recall the embeddings of all past pins (obviously not practical from a systems point of view). Interestingly, a clustering-based Oracle that only has to recall cluster centroid embeddings still improves over the baselines by a large margin. This result is not entirely surprising: users have multiple interests and often switch between those interests. Figure 3 depicts such an example, which is common in our setting. We note that none of the past 5 pins correlate with the latest pin, and one has to look further back to find stronger correlations. Hence, single embedding models with limited memory fail at this challenge.
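The decay average baseline and the threshold-based success metric above can be sketched as follows. The decay rate, the 0.8 threshold, and the unit-normalized embeddings are illustrative assumptions, not values fixed by the paper:

```python
import numpy as np

def decay_average_embedding(embeddings, ages, lam=0.01):
    """Time-decayed average of action-pin embeddings.

    embeddings: (M, d) array of pin embeddings; ages: days since each action.
    lam is a hypothetical decay rate, not the paper's tuned value.
    """
    w = np.exp(-lam * np.asarray(ages, dtype=float))
    e = (w[:, None] * np.asarray(embeddings)).sum(axis=0) / w.sum()
    return e / np.linalg.norm(e)  # unit-normalize for cosine retrieval

def hit(user_emb, next_pin_emb, threshold=0.8):
    """Success if cosine similarity to the next action pin clears a threshold.
    The 0.8 value is illustrative; the paper elides the exact threshold."""
    sim = float(np.dot(user_emb, next_pin_emb) / np.linalg.norm(next_pin_emb))
    return sim >= threshold
```

Recent actions dominate the decay average, which is exactly why it fails when the latest pins do not correlate with the next action, as in Figure 3.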

3.1. PinnerSage

We draw two key insights from the previous task: (i) It is too limiting to represent a user with a single embedding, and (ii) Clustering based methods can provide a reasonable trade-off between accuracy and storage requirement. These two observations underpin our approach, which has the following three components.

  1. Take users’ action pins from the last 90 days and cluster them into a small number of clusters.

  2. Compute a medoid based representation for each cluster.

  3. Compute an importance score of each cluster to the user.

3.1.1. Step 1: Cluster User Actions

We pose two main constraints on our choice of clustering methods.

  • The clusters should only combine conceptually similar pins.

  • It should automatically determine the number of clusters to account for the varying number of interests of a user.

The above two constraints are satisfied by Ward (Ward, 1963), a hierarchical agglomerative clustering method based on a minimum variance criterion (satisfying constraint 1). Additionally, the number of clusters is automatically determined based on the merge distances (satisfying constraint 2). In our experiments (Section 5), it performed better than K-means and complete linkage methods (Defays, 1977). Several other benchmark tests also establish the superior performance of Ward over other clustering methods (https://jmonlong.github.io/Hippocamplus/2018/02/13/tsne-and-clustering/).

Input: A = {a_1, ..., a_M} - action pins of a user; α - cluster merging threshold
Output: Grouping of the input pins into clusters
// Initial set up: each pin belongs to its own cluster
Set C ← {{a_1}, ..., {a_M}}
Set merge_history ← []
while |C| > 1 do
  // put the first cluster from C (without removing it) on the stack
  stack.push(first cluster in C)
  while stack is not empty do
    C_i ← stack.top(); C_j ← cluster in C nearest to C_i, with Ward distance d_ij
    if C_j is the element directly below C_i on the stack then
      // C_i and C_j are reciprocal nearest neighbors: merge clusters C_i and C_j
      stack.pop(); stack.pop()  // remove C_i and C_j
      merge_history.add((C_i ∪ C_j, d_ij))
      Set C ← (C \ {C_i, C_j}) ∪ {C_i ∪ C_j}, updating distances via Eq. (1)
    else
      stack.push(C_j)  // push the nearest cluster onto the stack
    end if
  end while
end while
Sort tuples in merge_history in decreasing order of merge distance
Set clusters ← {}
foreach (C', d') in merge_history do
  if d' ≤ α and C' is disjoint from every cluster already in clusters then
    clusters.add(C')  // add C' to the set of clusters
  end if
end foreach
return clusters
Algorithm 1 Ward(A, α)

Our implementation of Ward is adapted from the Lance-Williams algorithm (Lance and Williams, 1967), which provides an efficient way to update inter-cluster distances. Initially, Algorithm 1 assigns each pin to its own cluster. At each subsequent step, the two clusters whose merger leads to the minimum increase in within-cluster variance are merged. Suppose after some iterations we have clusters {C_1, C_2, ...}, with the distance between clusters C_i and C_j denoted d_ij. If two clusters C_i and C_j are merged, the distances to every other cluster C_k are updated as follows:

(1)  d_(i∪j),k = ((n_i + n_k) · d_ik + (n_j + n_k) · d_jk - n_k · d_ij) / (n_i + n_j + n_k)

where n_i is the number of pins in cluster C_i. Computational Complexity of Ward Clustering Algorithm. The computational complexity of Ward clustering is O(M^2), where M = |A|. To see this, note that in every outer while loop a cluster is added to the empty stack. Since a cluster cannot be added twice to the stack (see Appendix, Lemma 8.2), the algorithm has to start merging clusters once it cycles through all of them (worst case). Each step that pushes a cluster onto the stack or merges two clusters has a computational cost of O(M). The algorithm operates with M initial clusters and the intermediate merged clusters as it progresses, leading to a total computational complexity of O(M^2).
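For readers who want to experiment, SciPy ships a Ward implementation built on the same Lance-Williams recurrence. The sketch below uses it as a stand-in for Algorithm 1 (it is not the paper's in-house code); `alpha` is a hypothetical merge threshold used to cut the dendrogram so that the number of clusters falls out of the data rather than being fixed in advance:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, ward

def cluster_actions(pin_embeddings, alpha):
    """Ward clustering with an automatic cluster count.

    pin_embeddings: (M, d) array of action-pin embeddings.
    alpha: hypothetical merge threshold analogous to Algorithm 1's α.
    Returns a length-M array of cluster labels.
    """
    Z = ward(np.asarray(pin_embeddings))  # (M-1, 4) linkage matrix
    # Cut the dendrogram where the merge cost exceeds alpha.
    return fcluster(Z, t=alpha, criterion='distance')
```

With a well-chosen threshold, tight groups of conceptually similar pins stay together while disparate interests end up in separate clusters.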

3.1.2. Step 2: Medoid based Cluster Representation

After a set of pins is assigned to a cluster, we seek a compact representation for that cluster. A typical approach is to use the cluster centroid, a time decay average model, or a more complex sequence model such as an LSTM or GRU. However, one problem with the aforementioned techniques is that the embedding they infer could lie in a very different region of the d-dimensional space. This is particularly true if there are outlier pins assigned to the cluster, which could lead to large within-cluster variance. The side-effect of such an embedding would be the retrieval of non-relevant candidates for recommendation, as highlighted by Figure 2. We chose a more robust technique that selects a cluster member pin, called the medoid, to represent the cluster. We select the pin that minimizes the sum of squared distances to the other cluster members. Unlike the centroid or embeddings obtained by other complex models, the medoid by definition is a point in the d-dimensional space that coincides with one of the cluster members. Formally,

(2)  medoid(C) = argmin_{m in C} sum_{p in C} ||m - p||^2

An additional benefit of the medoid is that we only need to store the index of the medoid pin, as its embedding can be fetched on demand from an auxiliary key-value store.
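Equation (2) translates directly into a few lines of code. This is a sketch over raw embedding arrays, not the production implementation:

```python
import numpy as np

def medoid_index(cluster_embeddings):
    """Return the index of the cluster member minimizing the sum of
    squared distances to all other members (Eq. 2)."""
    X = np.asarray(cluster_embeddings, dtype=float)
    # Pairwise squared euclidean distances between all members.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return int(sq.sum(axis=1).argmin())
```

In production only the returned pin id would be stored; the embedding itself is fetched on demand from the key-value store.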

Input: A - action pins of a user; α - cluster merging threshold; λ - importance decay rate
Output: medoids and importance scores of the user's clusters
Set C ← Ward(A, α)
foreach C_i in C do
  Set m_i ← medoid of C_i  // Eq. (2)
  Set s_i ← importance of C_i  // Eq. (3)
end foreach
return {(m_i, s_i)}
Algorithm 2 PinnerSage(A, α, λ)

3.1.3. Step 3: Cluster Importance

Even though the number of clusters for a user is small, it can still be in the order of tens to a few hundred. Due to infra costs, we cannot use all of them to query the nearest neighbor system, making it essential to identify the relative importance of clusters to the user so we can sample the clusters by their importance scores. We consider a time decay average model for this purpose:

(3)  importance(C, λ) = sum_{i in C} e^{-λ(T - t_i)}

where t_i is the time of the user's action on pin i and T is the current time. A cluster that has been interacted with frequently and recently will have higher importance than others. Setting λ = 0 puts more emphasis on the frequent interests of the user, whereas a large λ puts more emphasis on the recent interests of the user. We found an intermediate value of λ to be a good balance between these two aspects. Algorithm 2 provides an end-to-end overview of the PinnerSage model. We note that our model operates independently for each user, so it can be implemented quite efficiently in parallel on a MapReduce based framework. We also maintain an online version of PinnerSage that runs on the most recent activities of the user. The outputs of the batch version and the online version are combined and used for generating the recommendations.
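A minimal sketch of the importance score (Eq. 3) and the proportional medoid sampling of Design Choice 4 follows. The decay rate passed in and the seeded RNG are illustrative assumptions:

```python
import math
import random

def cluster_importance(action_times, now, lam):
    """Eq. (3): sum of exponentially decayed recencies of a cluster's actions.

    action_times: timestamps t_i of actions in the cluster; now: current time T.
    lam = 0 counts actions (frequency); larger lam emphasizes recency.
    """
    return sum(math.exp(-lam * (now - t)) for t in action_times)

def sample_medoids(medoid_ids, importances, k=3, rng=None):
    """Sample up to k distinct medoids proportionally to their importance
    (Design Choice 4 samples 3). rng seed is for reproducibility only."""
    rng = rng or random.Random(0)
    ids, weights = list(medoid_ids), list(importances)
    picked = []
    for _ in range(min(k, len(ids))):
        i = rng.choices(range(len(ids)), weights=weights)[0]
        picked.append(ids.pop(i))
        weights.pop(i)
    return picked
```

Because each user is processed independently, both functions parallelize trivially across users in a MapReduce job.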

4. PinnerSage Recommendation System

PinnerSage can infer as many medoids for a user as the underlying data supports. This is great from a user representation point of view; however, not all medoids can be used simultaneously at any given time for generating recommendations. For this purpose, we consider importance sampling: we sample a maximum of 3 medoids per user at any given time. The sampled medoids are then used to retrieve candidates from our nearest-neighbor system. Figure 4 provides an overview of the PinnerSage recommendation system.

Figure 4. PinnerSage Recommendation System.

4.1. Approximate Nearest Neighbor Retrieval

Within Pinterest, we have an approximate nearest neighbor (ANN) retrieval system that maintains an efficient index of pin embeddings, enabling us to retrieve pins similar to an input query embedding. Since it indexes billions of pins, keeping its infrastructure cost and latency within the internally prescribed limits is an engineering challenge. We discuss a few tricks that have helped ANN become a first-class citizen alongside other candidate retrieval frameworks, such as Pixie (Eksombatchai et al., 2018).

4.1.1. Indexing Scheme

Many different embedding indexing schemes (see (Johnson et al., 2017)) were evaluated, such as LSH Orthoplex (Athitsos et al., 2008; Terasawa and Tanaka, 2007), Product Quantization (Babenko and Lempitsky, 2016), HNSW (Malkov and Yashunin, 2018), etc. We found HNSW to perform best on cost, latency, and recall. Table 2 shows that HNSW leads to a significant cost reduction over LSH Orthoplex. Superior performance of HNSW has also been reported on several other benchmark datasets (https://erikbern.com/2018/06/17/new-approximate-nearest-neighbor-benchmarks.html).

Candidate Pool Refinement. A full index over billions of pins would result in retrieving many near-duplicates. These near-duplicates are not desirable for recommendation purposes, as there is limited value in presenting them to the user. Furthermore, some pins can have intrinsically lower quality due to their aesthetics (low resolution or a large amount of text in the image). We filter out near-duplicates and lower quality pins via specialized in-house models. Table 2 shows that index refinement leads to a significant reduction in serving cost.

Caching Framework. All queries to the ANN system are formulated in pin embedding space. These embeddings are represented as arrays of floating point values that are not well suited for caching. On the other hand, a medoid's pin id is easy to cache and can reduce repeated calls to the ANN system. This is particularly true for popular pins that appear as medoids for multiple users. Table 2 shows the cost reduction of using medoids over centroids.

Optimization Technique | Cost
LSH Orthoplex → HNSW | -60%
Full Index → Index Refinement | -50%
Cluster Centroid → Medoid | -75%
Table 2. Relative cost benefits of optimization techniques.
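The caching argument can be made concrete with a small sketch: because a medoid is identified by a hashable pin id, retrieval results keyed on that id can be memoized and shared across users, whereas raw float-array embeddings cannot. The brute-force index, dimensions, and cache size below are illustrative stand-ins for the production HNSW system:

```python
from functools import lru_cache

import numpy as np

# Brute-force stand-in for the ANN index; production uses HNSW over billions
# of pins. Embeddings are unit-normalized so dot product equals cosine similarity.
np.random.seed(0)
PIN_EMBEDDINGS = np.random.rand(10_000, 64).astype(np.float32)
PIN_EMBEDDINGS /= np.linalg.norm(PIN_EMBEDDINGS, axis=1, keepdims=True)

@lru_cache(maxsize=100_000)
def retrieve_for_medoid(medoid_pin_id, k=10):
    """Nearest pins for a medoid, cached by the medoid's pin id.
    Popular medoids shared by many users hit the cache and skip the ANN call."""
    q = PIN_EMBEDDINGS[medoid_pin_id]
    sims = PIN_EMBEDDINGS @ q
    top = np.argsort(-sims)[:k]
    return tuple(int(i) for i in top)
```

A centroid-keyed cache would need the full float vector as the key, which is both large and rarely identical across users; the pin-id key sidesteps both problems.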

4.2. Model Serving

The main goal of PinnerSage is to recommend relevant content to users based on their past engagement history. At the same time, we wish to provide recommendations that are relevant to the actions a user is performing in real time. One way to do this is to feed all the user data to PinnerSage and run it as soon as the user takes an action. However, this is not practically feasible due to cost and latency concerns. We consider a two-pronged approach:

  1. Daily Batch Inference: PinnerSage is run daily over the last 90 day actions of a user on a MapReduce cluster. The output of the daily inference job (list of medoids and their importance) are served online in key-value store.

  2. Lightweight Online Inference: We collect the most recent 20 actions of each user on the latest day (after the last update to the entry in the key-value store) for online inference. PinnerSage uses a real-time event-based streaming service to consume action events and update the clusters initialized from the key-value store.
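The paper does not spell out the exact online assignment rule; a plausible sketch, assuming unit-normalized embeddings, a nearest-cluster assignment, and a hypothetical cosine-distance threshold `tau`, is:

```python
import numpy as np

def online_update(clusters, new_pin_emb, tau=0.5):
    """Lightweight online inference sketch (not the production logic).

    clusters: list of dicts with 'medoid_emb' and 'members' (embeddings).
    Assign the new action pin to the closest cluster by cosine distance to
    its medoid, or open a new cluster if nothing is within tau.
    tau is hypothetical; embeddings are assumed unit-normalized.
    """
    if clusters:
        dists = [1.0 - float(np.dot(c['medoid_emb'], new_pin_emb))
                 for c in clusters]
        i = int(np.argmin(dists))
        if dists[i] <= tau:
            clusters[i]['members'].append(new_pin_emb)
            return clusters
    clusters.append({'medoid_emb': new_pin_emb, 'members': [new_pin_emb]})
    return clusters
```

At the end of the day the batch job would re-cluster the full 90-day history, resolving any drift introduced by these incremental assignments.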

In practice, system optimization plays a critical role in enabling the productionization of PinnerSage. Table 2 shows a rough estimate of the cost reductions observed during implementation. While certain limitations are imposed on the PinnerSage framework, such as the two-pronged update strategy, the architecture allows each component to be improved independently.

5. Experiment

Here we evaluate PinnerSage and empirically validate its performance. We start with a qualitative assessment of PinnerSage, followed by A/B experiments and extensive offline evaluation.

Figure 5. PinnerSage clusters of an anonymous user.
Figure 6. Sample recommendations generated by PinnerSage for the top clusters of Figure 5.

5.1. PinnerSage Visualization

Figure 5 is a visualization of PinnerSage clusters for a given user. As can be seen, PinnerSage does a great job at generating conceptually consistent clusters by grouping only contextually similar pins together. Figure 6 provides an illustration of candidates retrieved by PinnerSage. The recommendation set is a healthy mix of pins that are relevant to the top three interests of the user: shoes, gadgets, and food. Since this user has interacted with these topics in the past, they are likely to find this diverse recommendation set interesting and relevant.

Experiment Volume Propensity
Homefeed +4% +2%
Shopping +20% +8%
Table 3. A/B experiments across Pinterest surfaces. Engagement gain of PinnerSage vs current production system.

5.2. Large Scale A/B Experiments

We ran large scale A/B experiments where users are randomly assigned to either control or experiment groups. Users assigned to the experiment group experience PinnerSage recommendations, while users in control get recommendations from the single embedding model (decay average embedding of action pins). Users across the two groups are shown an equal number of recommendations. Table 3 shows that PinnerSage provides significant engagement gains, increasing both overall engagement volume (repins and clicks) and engagement propensity (repins and clicks per user). These gains can be directly attributed to the increased quality and diversity of PinnerSage recommendations.

5.3. Offline Experiments

We conduct extensive offline experiments to evaluate PinnerSage and its variants w.r.t. baseline methods. We sampled a large set of users (tens of millions) and collected their past activities (actions and impressions) over the last 90 days. All activities before a chosen day d are used for training and activities from day d onward for testing.

Baselines. We compare PinnerSage with the following baselines: (a) single embedding models, such as last pin, decay average with several choices of the decay rate λ, LSTM, GRU, and HierTCN (You et al., 2019); (b) multi-embedding variants of PinnerSage with different choices of (i) clustering algorithm, (ii) cluster embedding computation method, and (iii) the parameter λ for cluster importance. Similar to (You et al., 2019), baseline models are trained with the objective of ranking user actions over impressions with several loss functions (l2, hinge, log-loss, etc.). Additionally, we trained baselines with several types of negatives besides impressions, such as random pins, popular pins, and hard negatives obtained by selecting pins that are similar to action pins.

Evaluation Method. The embeddings inferred by a model for a given user are evaluated on future actions of that user. Test batches are processed in chronological order: first day d, then day d + 1, and so on. Once evaluation over a test batch is completed, that batch is used to update the models, mimicking a daily batch update strategy.

5.3.1. Results on Candidate Retrieval Task.

Our main use-case of user embeddings is to retrieve relevant candidates for recommendation out of a very large candidate pool (billions of pins). The candidate retrieval set is generated as follows: suppose a model outputs e embeddings for a user; then 400/e nearest-neighbor pins are retrieved per embedding, and finally the retrieved pins are combined to create a recommendation set of size at most 400 (due to overlap it can be fewer than 400). The recommendation set is evaluated against the observed user actions from the test batch on the following two metrics:

  1. Relevance (Rel.) is the proportion of observed action pins that have high cosine similarity with any recommended pin. Higher relevance values increase the chance of the user finding the recommended set useful.

  2. Recall is the proportion of action pins that are found in the recommendation set.
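These two metrics can be sketched as follows, assuming unit-normalized embeddings; the 0.8 similarity threshold is illustrative, since the paper elides the exact value:

```python
import numpy as np

def relevance(action_embs, rec_embs, sim_threshold=0.8):
    """Fraction of action pins whose best cosine similarity to any
    recommended pin clears the threshold. Embeddings assumed unit-norm."""
    sims = np.asarray(action_embs) @ np.asarray(rec_embs).T
    return float((sims.max(axis=1) >= sim_threshold).mean())

def recall(action_ids, rec_ids):
    """Fraction of action pins found in the recommendation set."""
    rec = set(rec_ids)
    return sum(a in rec for a in action_ids) / len(action_ids)
```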

Table 4 shows that PinnerSage is more effective at retrieving relevant candidates than all baselines. In particular, the single embedding version of PinnerSage is better than state-of-the-art single embedding sequence methods. Amongst PinnerSage variants, we note that Ward performs better than K-means and complete linkage methods. For cluster embedding computation, sequence models and medoid selection perform similarly, so we chose the medoid as it is easier to store and has better caching properties. Cluster importance with λ = 0, which amounts to counting the number of pins in a cluster, performs worse than our intermediate choice of λ. Intuitively this makes sense, as a higher value of λ incorporates recency alongside frequency. However, if λ is too high, it over-emphasizes recent interests at the expense of long-term interests, leading to a drop in model performance.

Model | Rel. | Recall
Last pin model | 0% | 0%
Decay avg. model | 28% | 14%
Sequence models (HierTCN) | 31% | 16%
PinnerSage (sample embedding) | 33% | 18%
PinnerSage (K-means, k=5) | 91% | 68%
PinnerSage (Complete Linkage) | 88% | 65%
PinnerSage (embedding = Centroid) | 105% | 81%
PinnerSage (embedding = HierTCN) | 110% | 88%
PinnerSage (importance λ = 0) | 97% | 72%
PinnerSage (importance: high λ) | 94% | 69%
PinnerSage (Ward, Medoid, intermediate λ) | 110% | 88%
Table 4. Lift relative to the last pin model for the retrieval task.

5.3.2. Results on Candidate Ranking Task.

A user embedding is often used as a feature in a ranking model, especially to rank candidate pins. The candidate set is composed of action and impression pins from the test batch. To ensure that every test batch is weighted equally, we randomly sample 20 impressions per action. In cases where there are fewer than 20 impressions in a given test batch, we add random samples to maintain the 1:20 ratio of actions to impressions. Finally, the candidate pins are ranked in decreasing order of their maximum cosine similarity with any user embedding. A better embedding model should be able to rank actions above impressions. This intuition is captured via the following two metrics:

  1. R-Precision (R-Prec.) is the proportion of action pins in the top-n, where n is the number of actions considered for ranking against the impressions. It is a measure of the signal-to-noise ratio amongst the top-n ranked items.

  2. Reciprocal Rank (Rec. Rank) is the average reciprocal rank of the action pins. It measures how high up in the ranking the action pins are placed.
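The ranking procedure and the two metrics above can be sketched as follows. This is a standalone illustration: the helper names are ours, and scoring uses a candidate's maximum cosine similarity against any of the user's embeddings, as described:

```python
import numpy as np

def rank_candidates(user_embs, cand_embs):
    """Score each candidate by its maximum cosine similarity to any of the
    user's embeddings; return candidate indices in decreasing score order."""
    u = user_embs / np.linalg.norm(user_embs, axis=1, keepdims=True)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    scores = (c @ u.T).max(axis=1)  # best match over the user's embeddings
    return np.argsort(-scores)

def r_precision(ranked, action_set):
    """Proportion of action pins within the top-n, n = number of actions."""
    n = len(action_set)
    return sum(1 for i in ranked[:n] if i in action_set) / n

def reciprocal_rank(ranked, action_set):
    """Average reciprocal rank (1-indexed) of the action pins."""
    pos = {idx: r + 1 for r, idx in enumerate(ranked)}
    return sum(1.0 / pos[i] for i in action_set) / len(action_set)

# Toy example: two user interest embeddings, three candidates, two actions.
u = np.array([[1.0, 0.0], [0.0, 1.0]])
c = np.array([[0.9, 0.1], [0.1, 0.9], [-1.0, 0.0]])
order = list(rank_candidates(u, c))
assert r_precision(order, {0, 1}) == 1.0
```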

Table 5 shows that PinnerSage significantly outperforms the baselines, indicating the efficacy of the user embeddings it generates as a stand-alone feature. With regard to single embedding models, we make observations similar to those for the retrieval task: single embedding PinnerSage infers a better embedding. Amongst PinnerSage variants, we note that the ranking task is less sensitive to the embedding computation, and hence centroid, medoid and sequence models have similar performance, as the embeddings are only used to order pins. However, the task is sensitive to the cluster importance scores, as they determine which user embeddings are picked for ranking.

Model                                  R-Prec.  Rec. Rank
Last pin model                          0%        0%
Decay avg. model (λ=0.01)               8%        7%
Sequence models (HierTCN)              21%       16%
PinnerSage (sample embedding)          24%       18%
PinnerSage (K-means(k=5))              32%       24%
PinnerSage (Complete Linkage)          29%       22%
PinnerSage (embedding = Centroid)      37%       28%
PinnerSage (embedding = HierTCN)       37%       28%
PinnerSage (importance λ=0)            31%       24%
PinnerSage (importance λ=0.1)          30%       24%
PinnerSage (Ward, Medoid, λ=0.01)      37%       28%
Table 5. Lift relative to the last pin model for the ranking task.
Figure 7. Diversity-relevance tradeoff when different numbers of embeddings are selected for candidate retrieval.

5.3.3. Diversity Relevance Tradeoff.

Recommender systems often have to trade between relevance and diversity (Carbonell and Goldstein, 1998). This is particularly true for single embedding models that have limited focus. On the other hand, a multi-embedding model offers the flexibility of covering disparate user interests simultaneously. We define diversity as the average pairwise cosine distance between items in the recommended set. Figure 7 shows the diversity/relevance lift w.r.t. the last pin model. We note that increasing the number of embeddings used for retrieval increases both relevance and diversity. This makes intuitive sense, as with more embeddings the recommendation set is composed of relevant pins that span multiple interests of the user. Beyond a point the relevance gains taper off, as users' activities do not vary wildly in a given day (on average). In fact, at that point the recommendation diversity achieved by PinnerSage closely matches the diversity of the action pins themselves, which we consider a sweet spot.
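The diversity measure used here, the average pairwise cosine distance over a recommended set, can be computed as follows (a sketch; the function name is ours):

```python
import numpy as np

def diversity(embs):
    """Average pairwise cosine distance (1 - cosine similarity) among the
    embeddings of a recommended set of items."""
    x = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = x @ x.T
    iu = np.triu_indices(len(x), k=1)  # each unordered pair counted once
    return float(np.mean(1.0 - sims[iu]))

# Identical items give diversity 0; mutually orthogonal items give 1.
assert diversity(np.array([[1.0, 0.0], [1.0, 0.0]])) == 0.0
assert abs(diversity(np.array([[1.0, 0.0], [0.0, 1.0]])) - 1.0) < 1e-9
```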

6. Related Work

There is extensive research focused on learning embeddings for users and items, e.g., (Wang et al., 2018b; Covington et al., 2016; Cheng et al., 2016; Wu et al., 2018; You et al., 2019). Much of this work is fueled by models proposed to learn word representations, such as Word2Vec (Mikolov et al., 2013), a highly scalable family of continuous bag-of-words (CBOW) and skip-gram (SG) language models. Researchers from several domains have adopted word representation learning models for problems such as recommendation candidate ranking in various settings, for example movie, music, job, and pin recommendations (Wang et al., 2018a; Kenthapadi et al., 2017; Barkan and Koenigstein, 2016; You et al., 2019). Some recent papers have also focused on candidate retrieval. (Cheng et al., 2016) mentions that candidate retrieval can be handled by a combination of machine-learned models and human-defined rules; (Wang et al., 2018b) considers large-scale candidate generation from billions of users and items, and proposes a solution that pre-computes hundreds of similar items for each embedding offline; (Covington et al., 2016) discusses a production candidate generation and retrieval system based on a single user embedding.

Several prior works (Baltrunas and Amatriain, 2009; Weston et al., 2013; Liu et al., 2019) consider modeling users with multiple embeddings. (Baltrunas and Amatriain, 2009) uses multiple time-sensitive contextual profiles to capture a user's changing interests. (Weston et al., 2013) considers a max-function-based non-linearity in a factorization model, equivalently using multiple vectors to represent a single user, and shows a 25% improvement in YouTube recommendations. (Liu et al., 2019) uses polysemous embeddings (embeddings with multiple meanings) to improve node representation, but relies on an estimate of the occurrence probability of each embedding for inference. Both (Weston et al., 2013) and (Liu et al., 2019) report results on offline evaluation datasets. Our work complements and builds upon prior work to show how to operationalize a rich multi-embedding model in a production setting.

7. Conclusion

In this work, we presented an end-to-end system, called PinnerSage, that powers personalized recommendations at Pinterest. In contrast to prior production systems that are based on a single-embedding user representation, PinnerSage adopts a multi-embedding user representation scheme. Our proposed clustering scheme ensures that we get full insight into the needs of a user and understand them better. To make this happen, we adopt several design choices that allow our system to run efficiently and effectively, such as medoid-based cluster representation and importance sampling of medoids. Our offline experiments show that our approach leads to significant relevance gains for the retrieval task, as well as improvements in reciprocal rank for the ranking task. Our large-scale A/B tests show that PinnerSage provides significant real-world online gains. Much of the improvement delivered by our model can be attributed to its better understanding of user interests and its quick response to their needs. There are several promising areas for future work, such as the selection of multiple medoids per cluster and a more systematic reward-based framework for incorporating implicit feedback into the estimation of cluster importance.

8. Acknowledgements

We would like to extend our appreciation to the Homefeed and Shopping teams for helping set up the online A/B experiments. Our special thanks to the embedding infrastructure team for powering embedding nearest neighbor search.

References

  • [1] V. Athitsos, M. Potamias, P. Papapetrou, and G. Kollios (2008) Nearest neighbor retrieval using distance-based hashing. In ICDE, Cited by: §4.1.1.
  • [2] A. Babenko and V. Lempitsky (2016) Efficient indexing of billion-scale datasets of deep descriptors. In CVPR, Cited by: §4.1.1.
  • [3] L. Baltrunas and X. Amatriain (2009) Towards time-dependant recommendation based on implicit feedback. In Workshop on context-aware recommender systems, Cited by: §1, §6.
  • [4] O. Barkan and N. Koenigstein (2016) ITEM2VEC: neural item embedding for collaborative filtering. In Workshop on Machine Learning for Signal Processing, Cited by: §6.
  • [5] J. G. Carbonell and J. Goldstein (1998) The use of mmr, diversity-based reranking for reordering documents and producing summaries. In SIGIR, Cited by: §5.3.3.
  • [6] H. Cheng, L. Koc, J. Harmsen, T. Shaked, T. Chandra, H. Aradhye, G. Anderson, G. Corrado, W. Chai, M. Ispir, R. Anil, Z. Haque, L. Hong, V. Jain, X. Liu, and H. Shah (2016) Wide & deep learning for recommender systems. In DLRS@RecSys, Cited by: §1, §6.
  • [7] P. Covington, J. Adams, and E. Sargin (2016) Deep neural networks for youtube recommendations. In RecSys, pp. 191–198. Cited by: §1, §2, §2, §6.
  • [8] D. Defays (1977-01) An efficient algorithm for a complete link method. The Computer Journal 20 (4), pp. 364–366. Cited by: §3.1.1.
  • [9] C. Eksombatchai, P. Jindal, J. Z. Liu, Y. Liu, R. Sharma, C. Sugnet, M. Ulrich, and J. Leskovec (2018) Pixie: A system for recommending 3+ billion items to 200+ million users in real-time. In WWW, Cited by: §4.1.
  • [10] A. Epasto and B. Perozzi (2019) Is a single embedding enough? learning node representations that capture multiple social contexts. In WWW, Cited by: §1.
  • [11] M. Grbovic and H. Cheng (2018) Real-time personalization using embeddings for search ranking at airbnb. In KDD, Cited by: §1, §2, §2.
  • [12] J. Johnson, M. Douze, and H. Jégou (2017) Billion-scale similarity search with gpus. CoRR abs/1702.08734. Cited by: §4.1.1.
  • [13] K. Kenthapadi, B. Le, and G. Venkataraman (2017) Personalized job recommendation system at linkedin: practical challenges and lessons learned. In RecSys, Cited by: §6.
  • [14] G. N. Lance and W. T. Williams (1967-02) A general theory of classificatory sorting strategies 1. Hierarchical systems. Computer Journal 9 (4), pp. 373–380. External Links: ISSN 0010-4620 Cited by: §3.1.1.
  • [15] G. Linden, B. Smith, and J. York (2003) Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet computing 7 (1), pp. 76–80. Cited by: §1.
  • [16] N. Liu, Q. Tan, Y. Li, H. Yang, J. Zhou, and X. Hu (2019) Is a single vector enough?: exploring node polysemy for network embedding. In KDD, pp. 932–940. Cited by: §1, §2, §6.
  • [17] Y. A. Malkov and D. A. Yashunin (2018) Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. PAMI. Cited by: §4.1.1.
  • [18] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NIPS, Cited by: §6.
  • [19] S. Okura, Y. Tagami, S. Ono, and A. Tajima (2017) Embedding-based news recommendation for millions of users. In KDD, Cited by: §1.
  • [20] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl (2001) Item-based collaborative filtering recommendation algorithms. In WWW, pp. 285–295. Cited by: §1.
  • [21] K. Terasawa and Y. Tanaka (2007) Spherical lsh for approximate nearest neighbor search on unit hypersphere. In Workshop on Algorithms and Data Structures, Cited by: §4.1.1.
  • [22] D. Wang, S. Deng, X. Zhang, and G. Xu (2018) Learning to embed music and metadata for context-aware music recommendation. World Wide Web 21 (5), pp. 1399–1423. External Links: Link, Document Cited by: §6.
  • [23] J. Wang, P. Huang, H. Zhao, Z. Zhang, B. Zhao, and D. L. Lee (2018) Billion-scale commodity embedding for e-commerce recommendation in alibaba. In KDD, Cited by: §6.
  • [24] J. H. Ward Jr. (1963) Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association 58 (301), pp. 236–244. Cited by: §3.1.1.
  • [25] J. Weston, R. J. Weiss, and H. Yee (2013) Nonlinear latent factorization by embedding multiple user interests. In RecSys, pp. 65–68. Cited by: §1, §2, §6.
  • [26] L. Y. Wu, A. Fisch, S. Chopra, K. Adams, A. Bordes, and J. Weston (2018) StarSpace: embed all the things!. In AAAI, Cited by: §1, §6.
  • [27] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec (2018) Graph convolutional neural networks for web-scale recommender systems. In KDD, Cited by: §1, §2, §3.
  • [28] J. You, Y. Wang, A. Pal, P. Eksombatchai, C. Rosenberg, and J. Leskovec (2019) Hierarchical temporal convolutional networks for dynamic recommender systems. In WWW, Cited by: §1, §2, §3, §5.3, §6.
  • [29] N. Zhang, S. Deng, Z. Sun, X. Chen, W. Zhang, and H. Chen (2018) Attention-based capsule networks with dynamic routing for relation extraction. arXiv preprint arXiv:1812.11321. Cited by: §1.
  • [30] X. Zhao, R. Louca, D. Hu, and L. Hong (2018) Learning item-interaction embeddings for user recommendations. Cited by: §1.

Reproducibility Supplementary Materials

APPENDIX A: Convergence proof of Ward clustering algorithm

Lemma 8.1.

In Algo. 1, a merged cluster cannot have a lower distance to another cluster than the lowest distance of its children clusters to that cluster, i.e., d(Ci ∪ Cj, Ck) ≥ min{d(Ci, Ck), d(Cj, Ck)}.

Proof.

For clusters Ci and Cj to merge, the following two conditions must be met: d(Ci, Cj) ≤ d(Ci, Ck) and d(Ci, Cj) ≤ d(Cj, Ck). Without loss of generality, let d(Ci, Ck) ≤ d(Cj, Ck) and d(Cj, Ck) = d(Ci, Ck) + γ, where γ ≥ 0. We can simplify eq. 1 as follows:

(4)  d(Ci ∪ Cj, Ck) = [ (ni + nk) d(Ci, Ck) + (nj + nk) d(Cj, Ck) − nk d(Ci, Cj) ] / (ni + nj + nk)
                    ≥ [ (ni + nk) d(Ci, Ck) + (nj + nk) d(Ci, Ck) − nk d(Ci, Ck) ] / (ni + nj + nk)
                    = d(Ci, Ck),

which implies d(Ci ∪ Cj, Ck) ≥ d(Ci, Ck) = min{d(Ci, Ck), d(Cj, Ck)}. ∎

Lemma 8.2.

A cluster cannot be added twice to the stack in Ward clustering (Algo. 1).

Proof by contradiction.

Let the state of the stack at a particular time be ⟨Ci, Cj, Ck⟩, and suppose Ck adds Ci to the stack again. Since Cj was added after Ci in the stack, this implies that d(Ci, Cj) ≤ d(Ci, Ck) (condition 1). Similarly, from the subsequent additions to the stack, we get d(Cj, Ck) ≤ d(Cj, Ci) (condition 2) and d(Ck, Ci) ≤ d(Ck, Cj) (condition 3). We also note that by symmetry d(C, C′) = d(C′, C). Combining conditions 2 and 3 leads to d(Ci, Ck) ≤ d(Ci, Cj), which would contradict condition 1 unless d(Ci, Cj) = d(Cj, Ck) = d(Ck, Ci). Since Cj is the second element in the stack after the addition of Ck, Ck cannot add Ci given d(Ck, Ci) = d(Ck, Cj): ties are broken in favor of the cluster already below the top of the stack, so Ck merges with Cj instead. Hence Ci cannot be added twice to the stack. Additionally, a cluster formed by mergers since the first addition of Ci to the stack cannot add Ci again. This is because its distance to Ci is greater than or equal to the smallest distance of its child clusters to Ci by Lemma 8.1; since the child cluster closest to Ci did not add Ci, neither can the merged cluster. ∎
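To illustrate the stack behavior the lemma protects, here is a self-contained sketch of the nearest-neighbor-chain step over a symmetric distance matrix. The function name and tie-breaking detail are ours, not the paper's Algo. 1 verbatim; the assertion inside the loop encodes Lemma 8.2:

```python
import numpy as np

def grow_chain(dist, start=0):
    """Grow a nearest-neighbor chain (the 'stack' of Algo. 1) from `start`
    over a symmetric distance matrix, until the top two elements are mutual
    nearest neighbors -- the point at which Ward clustering would merge them.
    Ties are broken in favor of the element just below the top of the stack,
    as in the proof above."""
    n = dist.shape[0]
    chain = [start]
    while True:
        top = chain[-1]
        below = chain[-2] if len(chain) > 1 else None
        # Nearest neighbor of the top element; on ties, prefer `below`.
        nn = min((j for j in range(n) if j != top),
                 key=lambda j: (dist[top, j], 0 if j == below else 1))
        if nn == below:
            return chain          # top two are mutual nearest neighbors: merge
        assert nn not in chain    # Lemma 8.2: a cluster is never pushed twice
        chain.append(nn)

# Four points on a line at 0, 1, 3, 7; chains terminate without repeats.
pts = np.array([0.0, 1.0, 3.0, 7.0])
D = np.abs(pts[:, None] - pts[None, :])
assert grow_chain(D, start=0) == [0, 1]
assert grow_chain(D, start=3) == [3, 2, 1, 0]
```

Because each step strictly decreases the chain's last distance (ties fall back to the previous element), a revisit would contradict the nearest-neighbor property of the revisited cluster, which is exactly the argument of the proof.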