Demystifying Core Ranking in Pinterest Image Search

03/26/2018 ∙ by Linhong Zhu, et al. ∙ Association for Computing Machinery

The Pinterest Image Search Engine helps hundreds of millions of users discover interesting content every day, which motivates us to improve image search quality by evolving our ranking techniques. In this work, we share how we practically design and deploy various ranking pipelines in the Pinterest image search ecosystem. Specifically, we focus on introducing our novel research on three aspects: training data, user/image featurization, and ranking models. Extensive offline and online studies compared the performance of different models and demonstrated the efficiency and effectiveness of our final launched ranking models.




1. Introduction

Learning to rank (Burges et al., 2005; Burges, 2010; Cao et al., 2007; Chapelle and Chang, 2011; Dehghani et al., 2017; Joachims, 2002; Geng et al., 2007; Yin et al., 2016; Liu et al., 2009; Zheng et al., 2008) has been actively studied over the past decades to improve both the relevance of search results and searchers' engagement. With the advances in learning-to-rank technologies, one might assume that it is straightforward to build a ranking component for an image search engine. This is true if we simply want a workable solution: in the early days of Pinterest image search, we built our first search system on top of Apache Lucene and Solr (Smiley and Pugh, 2011; McCandless et al., 2010), the open-source information retrieval systems, and results were simply ranked by the text relevance scores between queries and the text descriptions of images.

However, in the Pinterest image search engine, the items users search for are Pins, each of which contains an image, a hyperlink and a description, rather than web pages or online services. In addition, different user engagement mechanisms make the Pinterest search process differ from that of general web search engines. We have therefore evolved our search ranking over the past few years by adding various advancements that address the unique challenges of Pinterest image search.

(a) Pinterest users can perform various actions on the result Pins of the query “valentines day nails”.
(b) Close-up: clicking one pin leads to a zoomed-in page. A further click on the “save” button is called a “Repin”.
(c) A second click on the close-up page in (b) leads to the external website; this is named a “click” in Pinterest.
Figure 1. Pinterest Image Search UI on Mobile Apps.

The first challenge arises from an important question: why do users search for images on Pinterest? As shown in Figure 1, Pinterest users (Pinners) can perform in total 60 actions on the result Pins, such as “repin”, “click-through”, “close up”, “try it”, etc. In addition, users have different intents while searching on Pinterest (Lo et al., 2016): some users prefer to browse pins to get inspiration, while female users prefer to shop the look on Pinterest or search for recipes to cook. On one hand, flexible engagement options help us understand how users search for images, and we can leverage those signals to provide a better ranking of search results; on the other hand, the heterogeneity of engagement actions poses an additional challenge: how should we incorporate this explicit feedback? In a traditional search engine, a clicked result can be explicitly weighed as more important than a non-clicked one, while in the Pinterest ecosystem it is very difficult to define a universal preference rule: is a “try it” pin preferable to a “close up” pin, or vice versa?

Another challenge lies in the nature of the image items. Compared to traditional documents or web pages, the text description of an image is much shorter and noisier. Meanwhile, although we understand that “a picture is worth a thousand words”, it is very difficult to extract reliable visual signals from an image.

Finally, although much literature has been published on advanced learning-to-rank algorithms (see the related work section) and their real-life industrial applications, the best ranking algorithm for a given application domain is rarely known in advance. Furthermore, an image search engine has much stricter latency requirements than recommendation systems such as News Feed or Friend Recommendation. It is therefore very important to strike a balance between the efficiency and effectiveness of ranking algorithms.

We thus address the aforementioned issues from three aspects:


We propose a simple yet effective way to combine, via weighting, the explicit feedback from user engagements into the ground-truth labels of the engagement training data. The engagement training data, together with human-curated relevance judgment data, are fed into our core ranking component in parallel to learn different ranking functions. Finally, model stacking is performed to combine the engagement-based ranking model and the relevance-based ranking model into the final ranking model.


In order to address the challenge of extracting reliable text and visual signals from pins, we developed advancements in featurization ranging from feature engineering to word and visual embeddings, visual relevance signals, and query and user intent understanding. In order to better utilize the findings on why Pinners use Pinterest to search for images, extensive feature engineering and user studies were performed to incorporate explicit feedback from different types of engagement into the ranking features of both Pins and queries. Furthermore, the learned intent of users and dozens of other user-level features are utilized in our core machine-learned ranking to provide a personalized image search experience for Pinners.


We design a cascading core ranking component to achieve a trade-off between search latency and search quality. Our cascading core ranking first filters the candidates from millions to thousands using a very lightweight ranking function, and subsequently applies a much more powerful full ranking over the thousands of remaining pins to achieve much better quality. For each stage of the cascading core ranking, we perform a detailed study of various ranking models and empirically analyze which model is “better” than another by examining their performance on both query-level and user-level quality metrics.

The remainder of this work is organized as follows. In Section 2, we first introduce how we curate training data from our own search logs and human evaluation platform. The feature representation for users, queries and pins is presented in Section 3. We then introduce the set of ranking models that we experimented with in different stages of the cascading ranking, and how we ensemble models built from different data sources, in Section 4. In Section 5, we present our offline and online experimental studies evaluating the performance of our core ranking in production. Related work is discussed in Section 6. Finally, we conclude this work and present future work in Section 7.

2. Engagement and Relevance Data in Pinterest Search

There are several ways to evaluate the quality of search results, including human relevance judgment and user behavioral metrics (e.g., click-through rate, repin rate, close-up rate, abandonment rate, etc.). A perfect search system should therefore return results that are both highly relevant and highly engaging. We thus design and develop two relatively independent data generation pipelines: an engagement data pipeline and a human relevance judgment data pipeline. The two are seamlessly combined in the same learning-to-rank module. In the following, we share our practical tricks for obtaining useful information from engagement and relevance data for the learning module.

2.1. Engagement Data

Learning from user behavior was first proposed by Joachims (Joachims, 2002), who presented an empirical evaluation of interpreting click-through evidence. Since then, click-through engagement logs have become standard training data for learning-to-rank optimization in search engines. Engagement data in the Pinterest search engine can be thought of as tuples $(q, u, P, M)$ consisting of the query $q$, the user $u$, the set of pins $P$ the user engaged with, and the engagement map $M$ that records the raw engagement counts of each type of action over the pins $P$. Note that the notation user $u$ here denotes not only a single user, but a group of users who share the same user feature representation.

However, as introduced earlier in Figure 1, when impression pins are displayed to users, they can perform multiple actions on the pins, including click-through, repin, close-up, like, hide, comment, try-it, etc. While the different types of actions provide us with multiple feedback signals from users, they also bring up a new challenge: how should we simultaneously combine and optimize multiple kinds of feedback?

One possible solution is to simply prepare multiple sources of engagement training data, each of which is fed into the ranking function to train a model optimizing a certain type of engagement action. For instance, we could train a click-based ranking model, a repin-based ranking model and a closeup-based ranking model, respectively. Finally, a calibration over the multiple models would be performed before serving them to obtain the final display. Unfortunately, we tried and experimented with hundreds of methods for model ensembling and calibration and were unable to obtain a high-quality ranking that does not sacrifice any engagement metric.

Thus, instead of calibrating over the models, we integrate the multiple engagement signals at the data level. Let $y_{q,u,p}$ denote the engagement-based quality label of pin $p$ for user $u$ and query $q$. To shorten the notation, we simply use $y_p$ to denote $y_{q,u,p}$ when the given query and user can be omitted without ambiguity. We thus generate the engagement-based quality label set of pins as follows.

For each pin $p$ with the same keyword query $q$ and user features $u$, the raw label $\tilde{y}_p$ is computed as a weighted aggregation of multiple types of actions over all the users with the same features. That is,

$$\tilde{y}_p = \sum_{a \in \mathcal{A}} w_a \, c_a(p), \qquad (1)$$

where $\mathcal{A}$ is the set of engagement actions, $c_a(p)$ is the raw engagement count of action $a$, and $w_a$ is the weight of a specific action $a$. The weight of each type of action is inversely proportional to the volume of that type of action.

We also normalize the raw label of each pin based on its position in the current ranking and its age, to correct the position bias and freshness bias, as follows:

$$y_p = \tilde{y}_p \cdot w_{\text{age}}(t_p) \cdot (1 + i_p)^{\gamma}, \qquad (2)$$

where $t_p$ and $i_p$ are the age and position of pin $p$, $w_{\text{age}}$ is the normalized weight for the ages of pins, and $\gamma$ is the parameter that controls the position decay.
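As an illustrative sketch of this label computation, the snippet below aggregates weighted action counts and then applies age and position corrections. The action weights, the exponential age weighting, and the power-law position boost are hypothetical choices; the production constants and exact functional forms are not published here.

```python
import math

# Hypothetical action weights: inversely proportional to each action's volume.
ACTION_WEIGHTS = {"close_up": 0.1, "repin": 1.0, "click": 2.0, "try_it": 5.0}

def raw_label(action_counts):
    """Weighted aggregation of raw engagement counts across action types."""
    return sum(ACTION_WEIGHTS.get(a, 0.0) * c for a, c in action_counts.items())

def normalized_label(action_counts, position, age_days,
                     age_weight=0.01, position_decay=0.5):
    """Correct position bias and freshness bias (illustrative forms only)."""
    age_factor = math.exp(-age_weight * age_days)         # fresher pins keep more credit
    position_factor = (1.0 + position) ** position_decay  # engagement at deeper positions counts more
    return raw_label(action_counts) * age_factor * position_factor
```

For example, one repin plus one click at the top position of a fresh result yields a raw label of 3.0 under these hypothetical weights.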

Another challenge in generating a good quality engagement training data is that we always have a huge stream of negative training samples but very few positive samples that received users’ engagement actions. To avoid over learning from too many negative samples, two pruning strategies are applied:

  1. Prune any query group and its training tuples that do not contain any positive training samples (i.e., $y_p = 0$ for all $p \in P$).

  2. For each query group, randomly prune negative samples if the number of negative samples is greater than a threshold $\theta$.
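A minimal sketch of these two pruning strategies, assuming query groups are represented as a dict from a (query, user-segment) key to (pin_id, label) tuples; the threshold value is illustrative:

```python
import random

def prune_query_groups(groups, max_negatives=20, seed=7):
    """Apply the two pruning strategies to engagement training data.

    `groups` maps a (query, user-segment) key to a list of
    (pin_id, label) tuples; label > 0 marks a positive sample.
    """
    rng = random.Random(seed)
    pruned = {}
    for key, samples in groups.items():
        positives = [s for s in samples if s[1] > 0]
        if not positives:                # strategy 1: drop groups with no positives
            continue
        negatives = [s for s in samples if s[1] <= 0]
        if len(negatives) > max_negatives:   # strategy 2: downsample negatives
            negatives = rng.sample(negatives, max_negatives)
        pruned[key] = positives + negatives
    return pruned
```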

With the above simple yet effective methods, engagement-based training data can be automatically extracted from our Pinterest search logs.

2.2. Human Relevance Data

While aggregation over large-scale, individually unreliable user search sessions yields reliable engagement training data with implicit feedback, it also inherits bias from the current ranking function; position bias is one example. To correct the ranking bias, we also curate relevance judgment data from human experts with an in-house crowd-sourcing platform. The template for rating how relevant a Pin is to a query is shown in Figure 2. Note that each human expert must be a core Pinterest user and pass a golden-set query quiz before she/he can start relevance judgments on a three-level scale: very relevant, relevant, not relevant. The raw quality label is then averaged over the ratings of all the human experts.

Figure 2. Template for rating how relevant a pin is to a query.

2.3. Combining Engagement with Relevance

Clearly, the range of the raw quality label of the human relevance data differs substantially from that of the engagement data. Figure 3 reports the distribution of quality labels in a set of sampled engagement data and that of human judgment scores in the human relevance data after downsampling the negative tuples. Even if we normalize both into the same range, such as $[0, 1]$, it is still not an apples-to-apples comparison. Therefore, we simply consider each training data source independently, feed each into the ranking function to train a specific model, and then perform the model ensemble described in Section 4.3. This ad-hoc solution performs best in both our offline and online A/B test evaluations.

(a) Distribution of engagement scores
(b) Distribution of relevance scores
Figure 3. Distribution of quality label across different data sources

3. Feature Representation for Ranking

There are several major groups of features in traditional search engines which, taken together, comprise thousands of features (Chapelle and Chang, 2011; Geng et al., 2007). Here we restrict our discussion to how we enhance traditional ranking features to address the unique challenges of Pinterest image search.

3.1. Beyond Text Relevance Feature

As discussed earlier, the text description of each Pin is usually very short and noisy. To address this issue, we built an intensive pipeline that generates high-quality text annotations for each pin in the form of unigrams, bigrams and trigrams. The text annotations of a pin are extracted from different sources such as the title, the description, texts from the crawled linked web pages, texts extracted from the visual image, and automatically classified annotation labels. These aggregated annotations are then utilized to compute the text matching score using BM25 (Robertson and Zaragoza, 2009) and/or proximity BM25 (Song et al., 2008).
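For reference, the text-matching step can be sketched with a standard BM25 scorer over a pin's aggregated annotations; the corpus statistics and the parameter defaults k1 = 1.2, b = 0.75 are conventional values, not Pinterest's production settings:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, num_docs,
               avg_doc_len, k1=1.2, b=0.75):
    """Standard BM25 over a pin's aggregated text annotations.

    doc_freq maps a term to its document frequency in the corpus.
    """
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        df = doc_freq.get(term, 0)
        idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
        denom = tf[term] + k1 * (1 - b + b * dl / avg_doc_len)
        score += idf * tf[term] * (k1 + 1) / denom
    return score
```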

Even with high-quality image annotations, the text signal is still much weaker and noisier than that in traditional web page search. Therefore, in addition to word-level relevance measurements, a set of intent-based and embedding-based similarity features are developed to enhance the traditional text-based relevance.


Categoryboost Features:

This type of feature goes beyond similarity at the word level and computes similarity at the category level. Note that at Pinterest we have a precise, human-curated category taxonomy, which contains 32 L1 categories and 500 L2 categories. Both queries and pins are annotated with categories and their confidence scores through our multi-label categorizer.


Topic Features:

Similar to categoryboost, this type of feature goes beyond similarity at the word level and computes similarity at the topic level. However, in contrast to a category, each topic here denotes a distribution of words discovered by statistical topic modeling such as Latent Dirichlet allocation (Blei et al., 2003).

Embedding Features:

The group of embedding features evaluates the similarity between a user's query request and the pins based on their distances in the learned distributed latent representation space. Here both word embeddings (Mao et al., 2016) and visual embeddings (Jing et al., 2015; Liu et al., 2017a) are trained and inferred via different deep neural network architectures on our own Pinterest image corpora.

Our enhanced text relevance features play very important roles in our ranking model. For instance, the categoryboost feature was the 15th most important feature in the organic search ranking model and ranked 1st in the search ads relevance ranking model.

3.2. User Intent Features

We derive a set of user-intent features from the explicit feedback received via user engagement.


Navboost Features:

Navboost is our signal into how well a pin performs in general and in the context of a specific query and user segment. It is based on the projected close-up, click, long-click and repin propensity estimated from previous user engagement. In addition to signals segmented by type of action, we also derive a family of Navboost signals segmented by country, gender, and aggregation window (e.g., 7 days, 90 days, two years, etc.).


Tokenboost Features:

Similarly, in order to increase coverage, another feature, Tokenboost, is proposed to evaluate how well a pin performs in general and in the context of a specific token.

Gender Features:

Pinterest currently has a majority-female user base. To ensure we provide equally good content to male users, we developed a family of gender features to determine, generally, whether a pin is gender-neutral or would resonate with men. We can then rank gender-neutral or male-specific Pins higher whenever a male user searches. For example, if a man searches for shoes, we want to ensure he finds shoes for him, not women's shoes.

Personalized Features:

As our mission is to help people discover and do what they love, we always put users first and provide as much personalization in the results as possible. To do this, we rely not only on users' demographic information, but also on various intent-based features such as the categories, topics, and embeddings of users.

User intent features are among the most important features for core ranking; they help our learning algorithm discover which types of pins are “really” relevant and interesting to users. For instance, the Navboost feature is able to tell the ranking function that a pin about “travel guides to China” is much more attractive than a pin about “China Map” (which is ranked 1st in Google Image Search) or “China National Flag” when a user searches the query “China” on Pinterest.

3.3. Query Intent Features

Similar to traditional web search, we also utilize common query-dependent features such as the length, frequency, and click-through rate of the query. In addition to those common features, we further develop a set of Pinterest-specific features, such as whether the query is male-oriented, the ratio between click-through and repin, and the category and other intents of the query.

3.4. Socialness, Visual and Other Features

In addition to the above features, there exist more features unique to the Pinterest ecosystem. Since each ranking item is an image, dozens of visual features are developed, ranging from simple image scores based on image size and aspect ratio to image hashing features.

Meanwhile, in addition to image search, Pinterest also provides other social products such as image sharing, friend/pin/board following, and cascading image feed recommendation. These products also provide very valuable ranking features such as the socialness, popularity, and freshness of a pin or a user.

4. Cascading Ranking Models

Pinterest Search handles billions of queries every month and helps hundreds of millions of monthly active users discover useful ideas through high quality Pins. Due to the huge volume of user queries and pins, it is critical to provide a ranking solution that is both effective and efficient. In this section, we provide a deep-dive walk through of our cascading core ranking module.

4.1. Overview of the Cascading Ranking

Figure 4. An illustrative view of cascading ranking

As illustrated in Figure 4, we develop a three-stage cascading ranking module: a light-weight stage, a full-ranking stage, and a re-ranking stage. Note that multi-stage ranking was proposed as early as NestedRanker (Matveeva et al., 2006) to obtain high accuracy in retrieval. However, only recently, motivated by advances in cascading learning for traditional classification and detection (Raykar et al., 2010), has cascading ranking (Liu et al., 2017b) been re-introduced to improve both the accuracy and the efficiency of ranking systems. Coincidentally, the Pinterest image search system applies a cascading ranking design similar to that of the Alibaba commerce search engine (Liu et al., 2017b). In the light-weight stage, an efficient model (e.g., a linear model) is applied over a set of important but cheaply computed features to filter out negative pins before passing candidates to the full-ranking stage. As shown in Figure 4, the light-weight stage successfully filters out millions of pins and restricts the candidate size for full ranking to the scale of thousands. In the full-ranking stage, we select a set of more precise but expensive features, together with a complex model, followed by model ensembling, to provide a high-quality ranking. Finally, in the re-ranking stage, several post-processing steps are applied before returning results to the user, to improve the freshness, diversity, and locale- and language-awareness of the results.

To ease the presentation, we use $q$, $u$, $p$ to denote a query, a user and a pin respectively. $\mathbf{x}_{q,u,p}$ denotes the feature representation for a tuple with query $q$, user $u$ and pin $p$ (see Section 3 for more details). $\tilde{y}_{q,u,p}$ is the observed quality score of pin $p$ given query $q$ and user $u$, usually obtained from either the search log or human judgment (see Section 2). $y_{q,u,p}$ is the ground-truth quality label of pin $p$ given query $q$ and user $u$, which is constructed from the observed quality score $\tilde{y}_{q,u,p}$. Similarly, we use $f(\mathbf{x}_{q,u,p})$ to denote the scoring function that estimates the quality score of pin $p$ given query $q$ and user $u$. To shorten the notation, we simply use $y_p$ to denote $y_{q,u,p}$ and $f(p)$ to denote $f(\mathbf{x}_{q,u,p})$ when the given query and user can be omitted without ambiguity. $\mathcal{L}$ denotes the loss function and $f$ denotes the scoring function.

4.2. Ranking Models

| Stage | Features | Model | Pointwise / Pairwise |
| --- | --- | --- | --- |
| Light-weight | 8 features | Rule-based | – |
| Light-weight | 8 features | RankSVM (Joachims, 2002) | Pairwise |
| Full | All features | GBDT (Li et al., 2008; Yin et al., 2016) | Pointwise |
| Full | All features | DNN | Pointwise |
| Full | All features | CNN | Pointwise |
| Full | All features | RankNet (Burges et al., 2005; Burges, 2010) | Pairwise |
| Full | All features | RankSVM (Joachims, 2002) | Pairwise |
| Full | All features | GBRT (Zheng et al., 2007; Zheng et al., 2008) | Pairwise |
| Re-ranking | 6 features | Rule-based | – |
| Re-ranking | 6 features | GBDT (Li et al., 2008; Yin et al., 2016) | Pointwise |
| Re-ranking | 6 features | GBRT (Zheng et al., 2007; Zheng et al., 2008) | Pairwise |
| Re-ranking | 6 features | RankSVM (Joachims, 2002) | Pairwise |

Table 1. A list of models experimented with in different stages of the cascading core ranking.

As shown in Table 1, we experimented with a list of representative state-of-the-art models, with our own variations of loss functions and architectures, in different stages of the cascading core ranking. In the following, we briefly introduce how we adapt each model to our ranking framework. We omit the details of the rule-based model since it is applied very intuitively.

Gradient Boost Decision Tree (GBDT)

Given a continuous and differentiable loss function $\mathcal{L}$, a Gradient Boosting Machine (Friedman, 2001) learns an additive classifier $f_T = \sum_{t=1}^{T} \alpha h_t$ that minimizes $\mathcal{L}(f_T)$, where $\alpha$ is the learning rate. In the pointwise setting of GBDT, each $h_t$ is a limited-depth regression tree (also referred to as a weak learner) added to the current classifier at iteration $t$. The weak learner $h_t$ is selected to minimize the loss $\mathcal{L}(f_{t-1} + \alpha h_t)$. We use mean square loss as the training loss for the given training instances:

$$\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \big( f(p_i) - y_{p_i} \big)^2, \qquad (3)$$

where $N$ is the number of training instances and the ground-truth label $y_{p_i}$ is equal to the observed continuous quality label $\tilde{y}_{p_i}$.
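A minimal pointwise GBDT under this mean square loss can be sketched as follows; depth-1 regression stumps stand in for the limited-depth trees, and the learning rate and tree count are illustrative:

```python
import numpy as np

def fit_stump(X, residual):
    """Fit the best depth-1 regression tree (stump) for squared loss."""
    best = (float("inf"), 0, 0.0, residual.mean(), residual.mean())
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            left = X[:, j] <= thr
            if left.all() or (~left).all():
                continue
            lv, rv = residual[left].mean(), residual[~left].mean()
            err = ((residual[left] - lv) ** 2).sum() + ((residual[~left] - rv) ** 2).sum()
            if err < best[0]:
                best = (err, j, thr, lv, rv)
    _, j, thr, lv, rv = best
    return lambda X: np.where(X[:, j] <= thr, lv, rv)

def gbdt_fit(X, y, n_trees=20, lr=0.3):
    """Pointwise GBDT minimizing mean squared loss.

    Each stump fits the residual y - f(x), i.e. the negative gradient
    of the squared loss with respect to the current prediction.
    """
    pred = np.zeros(len(y))
    trees = []
    for _ in range(n_trees):
        tree = fit_stump(X, y - pred)
        trees.append(tree)
        pred += lr * tree(X)
    return lambda Xq: lr * sum(t(Xq) for t in trees)
```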

Deep Neural Network (DNN) The conceptual architecture of the DNN model is illustrated in Figure 5(a). This architecture models a point-wise ranking model that learns to predict the quality score $y_p$.

Instead of directly learning a scoring function $f(\mathbf{x}_{q,u,p}; \theta)$ that determines the quality score of pin $p$ for query $q$ and user $u$ given a set of model parameters $\theta$ (Dehghani et al., 2017), we transform the problem into a multi-class classification problem that classifies each pin into a 4-scale label [1, 2, 3, 4]. Specifically, during the training phase, we discretize the continuous quality label $\tilde{y}_p$ into the ordinal label $y_p$ and train a multi-class classifier that predicts the probability $P(y_p = k)$ of pin $p$ being in class $k$.

As shown in Figure 5(a), we use cross entropy loss as the training loss for a single training instance:

$$\mathcal{L} = -\sum_{k=1}^{K} \mathbb{1}[y_p = k] \log P(y_p = k), \qquad (4)$$

where $K$ is the number of class labels ($K = 4$ in this setting).

In the inference phase, we treat the trained model as a point-wise scoring function to score each pin $p$ for query $q$ and user $u$ using the following conversion function:

$$f(p) = \sum_{k=1}^{K} k \cdot P(y_p = k). \qquad (5)$$
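The text does not reproduce the conversion function itself, so the sketch below assumes a natural choice: the probability-weighted expectation over the four ordinal labels.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def dnn_score(logits, labels=(1, 2, 3, 4)):
    """Convert 4-class logits into a pointwise ranking score.

    Assumed conversion: the expected ordinal label under the
    predicted class distribution.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    return float(probs @ np.asarray(labels, dtype=float))
```

Under this conversion, a pin whose probability mass concentrates on class 4 scores near 4.0, and a uniform distribution scores 2.5.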

(a) Simple neural network
(b) Convolutional neural network
Figure 5. Different ranking architectures

Convolutional Neural Network (CNN) In this model, similar to the previous DNN model, the goal is to learn a multi-class classifier and then convert the predicted class probabilities into a scoring function using Eq. 5. As depicted in Figure 5(b), the architecture contains a first convolutional layer, followed by a max-pooling layer and a ReLU activation, a second convolutional layer, again followed by a max-pooling layer and a ReLU activation, a fully connected layer, and the output layer.

Despite the differences in the architecture, the CNN model uses the same problem formulation, cross entropy loss function, and score conversion function (Eq. 5) as the DNN.

RankNet Burges et al. (Burges et al., 2005) proposed learning to rank using a probabilistic cost function based on pairs of examples. Intuitively, the pairwise model tries to learn the correct ordering of pairs of documents in the ranked lists of individual queries. In our setting, the model learns a ranking function which predicts the probability $P_{ij}$ of pin $p_i$ being ranked higher than $p_j$ given query $q$ and user $u$.

Therefore, in the training phase, one important task is to extract the preference pair set $\mathcal{P}$ given query $q$ and user $u$. In RankNet, the preference pairs were extracted from pairs of consecutive training samples in the ranked lists of individual queries. When applying RankNet to Pinterest search ranking, the preference pair set is constructed based on the raw quality label: for instance, $p_i$ is preferred over $p_j$ if $\tilde{y}_{p_i} > \tilde{y}_{p_j}$. Note that this preference pair set construction is applied to all of the following pairwise models.

Given a preference pair $(p_i, p_j)$, Burges et al. (Burges et al., 2005) used the cross entropy as the loss function in RankNet:

$$\mathcal{L} = -\bar{P}_{ij} \log P_{ij} - (1 - \bar{P}_{ij}) \log(1 - P_{ij}), \qquad (6)$$

where $\bar{P}_{ij}$ is the ground-truth probability of pin $p_i$ being ranked higher than $p_j$.

The model was named RankNet because Burges et al. (Burges et al., 2005) used a two-layer neural network to optimize the loss function in Eq. 6. The more recent ranking model proposed by Dehghani et al. (Dehghani et al., 2017) can be considered a variant of RankNet that uses a hinge loss function and a different way of converting the pairwise ranking probability into a scoring function.
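The preference-pair extraction and the RankNet loss can be sketched as follows; here the ground-truth pairwise probability is fixed to 1 for every extracted pair, a common simplification:

```python
import numpy as np

def preference_pairs(labels):
    """Extract preference pairs (i, j) where labels[i] > labels[j]."""
    n = len(labels)
    return [(i, j) for i in range(n) for j in range(n) if labels[i] > labels[j]]

def ranknet_loss(scores, pairs):
    """Mean cross-entropy over pairwise ranking probabilities.

    P_ij is modeled as a logistic function of the score difference;
    the ground-truth probability for each extracted pair is 1.
    """
    scores = np.asarray(scores, dtype=float)
    loss = 0.0
    for i, j in pairs:
        p_ij = 1.0 / (1.0 + np.exp(-(scores[i] - scores[j])))
        loss += -np.log(p_ij)
    return loss / max(len(pairs), 1)
```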

RankSVM In the pairwise setting of RankSVM, given the preference pair set $\mathcal{P}$, RankSVM (Joachims, 2002) aims to optimize the following problem:

$$\min_{\mathbf{w}} \ \frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{(p_i, p_j) \in \mathcal{P}} \ell\big(\mathbf{w}^{\top}(\mathbf{x}_{p_i} - \mathbf{x}_{p_j})\big). \qquad (7)$$

A popular loss function used in practice is the quadratically smoothed hinge loss (Zhang, 2004), such that $\ell(t) = \frac{1}{2} \max(0, 1 - t)^2$.
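A sketch of the pairwise RankSVM objective; the squared-hinge form below is one common instance of the quadratically smoothed hinge, and the regularization constant is illustrative:

```python
import numpy as np

def smoothed_hinge(margin):
    """Squared hinge: one common form of the quadratically smoothed hinge."""
    return 0.5 * max(0.0, 1.0 - margin) ** 2

def ranksvm_objective(w, pairs_features, reg=0.1):
    """Pairwise RankSVM objective: L2 regularizer plus the smoothed hinge
    over feature differences x_i - x_j of each preference pair."""
    w = np.asarray(w, dtype=float)
    loss = sum(smoothed_hinge(w @ (xi - xj)) for xi, xj in pairs_features)
    return 0.5 * reg * (w @ w) + loss
```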

Gradient Boost Ranking Tree (GBRT) Intuitively, one can view GBRT as a combination of RankSVM and GBDT. In the pairwise setting of GBRT, similar to RankSVM, at each iteration the model aims to learn a ranking function that predicts the probability of pin $p_i$ being ranked higher than $p_j$ given query $q$ and user $u$. In addition, similar to the setting of GBDT, here the ranking function is a limited-depth regression tree $h_t$. Again, the decision tree is selected to minimize the loss $\mathcal{L}(f_{t-1} + \alpha h_t)$, where the loss function is defined as:

$$\mathcal{L} = \sum_{(p_i, p_j) \in \mathcal{P}} \max\big(0,\, 1 - (f(p_i) - f(p_j))\big)^2. \qquad (8)$$
4.3. Model Ensemble across Different Data Sources

In this section, we discuss how we perform calibration over multiple models that are trained from different data sources (e.g., engagement training data versus human relevance data).

Various ensemble techniques (Dietterich, 2000) have been proposed to decrease variance and bias and improve predictive accuracy, such as stacking, cascading, bagging and boosting (GBDT in Section 4.2 is a popular boosting method). Note that the goal here is not only to improve the quality of ranking using multiple data sources, but also to maintain the low latency of the entire core ranking system. We therefore consider a specific type of ensemble approach, stacking, which has relatively low computational cost.

Stacking first trains several models from different data sources, and the final prediction is a linear combination of these models. It introduces a meta-level and uses another model or approach to estimate the weight of each model, i.e., to determine which model performs well on the given input data.

Note that stacking can be performed either within the training of each individual model or after the training of each individual model. When stacking is applied after training each individual model, the final scoring function is defined as

$$f(p) = \omega f_e(p) + (1 - \omega) f_r(p), \qquad (9)$$

where $f_e(p)$ / $f_r(p)$ is the predicted score of the model trained on engagement / human relevance judgment data and $\omega$ is the combination coefficient.
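The post-training stacking step reduces to a linear blend of the two models' scores; the coefficient below is a hypothetical value that would be tuned on validation data:

```python
def stacked_score(engagement_score, relevance_score, weight=0.7):
    """Blend the score of the engagement-trained model with the score of
    the relevance-trained model (post-training stacking).

    `weight` plays the role of the combination coefficient.
    """
    return weight * engagement_score + (1.0 - weight) * relevance_score
```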

Stacking can also be performed within model training. For instance, Zheng et al. (Zheng et al., 2008) linearly combined a tree model that fits the engagement data and another tree model that fits the human judgment data using the following loss function:

$$\mathcal{L} = \omega \sum_{i} \big(f(p_i) - y_{p_i}\big)^2 + (1 - \omega) \sum_{j} \big(f(p_j) - r_{p_j}\big)^2, \qquad (10)$$

where $r_{p_j}$ is the relevance label for pin $p_j$ and $\omega$ controls the contribution of each data source.

Here we choose the stage at which to perform stacking based on the complexity of each individual model: stacking is performed within the model training phase if each individual model is relatively cheap to compute, and after training each individual model otherwise (e.g., when each individual model is a neural network).

Note that, differing from Eq. 10, we always use the same loss function for different data sources. For instance, assume that we aim to train GBRT tree models from both engagement training data and human relevance data; we then simply optimize the combined pairwise loss function:

$$\mathcal{L} = \omega \sum_{(p_i, p_j) \in \mathcal{P}_e} \ell\big(f(p_i) - f(p_j)\big) + (1 - \omega) \sum_{(p_i, p_j) \in \mathcal{P}_r} \ell\big(f(p_i) - f(p_j)\big), \qquad (11)$$

where $\mathcal{P}_e$ / $\mathcal{P}_r$ denotes the preference set extracted from engagement / human judgment data respectively, and again $\omega$ controls the contribution of each data source. The advantage of this loss function is that $\omega$ can also be intuitively explained as proportional to the number of trees grown from each data source.

5. Experiment

5.1. Offline Experimental Setting

The first group of experiments was conducted offline on the training data extracted as described in Section 2. For each country and language, we curated 5,000 queries and performed human judgment on 400 pins per query. In addition, we built the engagement training data pipeline by randomly extracting 1% of search user session logs from the most recent 7 days. The full data set was randomly divided: 70% was used for training, 20% for testing and 10% for validation. In total we have 15 million training instances.

5.1.1. Feature Statistics

We also analyzed the coverage and distribution of each individual feature. Due to space limitations, we report the statistics of the top important features from each group in Figure 6.

(a) Text relevance feature
(b) Social feature
(c) Query intent feature
(d) User intent feature
Figure 6. Distribution of selected feature values

5.1.2. Offline Measurement Metrics

In the offline setting, we use the query-level Normalized Discounted Cumulative Gain (NDCG) (Järvelin and Kekäläinen, 2002). Given a list of documents and their ground truth labels $y_{p_1}, \ldots, y_{p_n}$, the discounted cumulative gain at position $k$ is defined as:

$$DCG_k = \sum_{i=1}^{k} \frac{2^{y_{p_i}} - 1}{\log_2(i + 1)}. \qquad (12)$$

The NDCG is thus defined as:

$$NDCG_k = \frac{DCG_k}{IDCG_k}, \qquad (13)$$

where $IDCG_k$ is the ideal discounted cumulative gain.
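A minimal NDCG@k implementation, assuming the common exponential gain 2^rel − 1 and logarithmic discount:

```python
import numpy as np

def dcg_at_k(labels, k):
    """Discounted cumulative gain at position k (exponential gain form)."""
    labels = np.asarray(labels, dtype=float)[:k]
    discounts = np.log2(np.arange(2, labels.size + 2))  # log2(i+1) for i = 1..k
    return float(((2 ** labels - 1) / discounts).sum())

def ndcg_at_k(labels, k):
    """NDCG@k: DCG of the ranked list divided by the ideal DCG."""
    ideal = dcg_at_k(sorted(labels, reverse=True), k)
    return dcg_at_k(labels, k) / ideal if ideal > 0 else 0.0
```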

Since we have two different data sources, we derive two measurement metrics: a relevance NDCG for the human relevance data and an engagement NDCG for the engagement data.

5.2. Online Experimental Setting

A standard A/B test is conducted online, where users are bucketed into 100 buckets, and the control group and the enabled group can each use as many as 50 buckets. In this experiment, 5% of users in the control group used the old production ranking model, while another 5% of users in the enabled group used the experimental ranking model.

The Pinterest image search engine handles on average 2 billion monthly text searches, 600 million monthly visual searches, and 70 million queries every day, and the query volume can double during peak periods such as Valentine's Day and Halloween. Therefore, roughly 7 million queries per day and their search results were evaluated in our online experiments.

5.2.1. Online Measurement Metrics

In the online setting, we use a set of both user-level and query-level measurement metrics. For query-level metrics, repin per search, click per search, close-up per search and engagement per search were the main metrics we used. This is because repin, click and close-up are the three main types among 60 action types in total. The volume of the close-up action (a user clicked on a pin to see the zoomed-in image and the pin's description) is dominant since this action is the cheapest. In contrast, the volume of the click action is much lower because a click is more expensive to perform (as shown in Figure 1, a click means that a user clicked the hyperlink of a pin and went to the external linked web page after a close-up).
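For illustration, these query-level metrics can be computed from per-search action logs as follows (the action names and session representation are hypothetical):

```python
def query_level_metrics(search_sessions):
    """Compute repin/click/close-up/engagement per search from a list of
    search sessions, where each session is the list of action types it
    produced. A search counts as engaged if it produced any action."""
    n = len(search_sessions)
    counts = {"repin": 0, "click": 0, "closeup": 0}
    engaged = 0
    for actions in search_sessions:
        for a in actions:
            if a in counts:
                counts[a] += 1
        if actions:
            engaged += 1
    return {
        "repin_per_search": counts["repin"] / n,
        "click_per_search": counts["click"] / n,
        "closeup_per_search": counts["closeup"] / n,
        "engagement_per_search": engaged / n,
    }
```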

At the user level, we use the following measurement metrics:


In order to evaluate the effect of re-ranking in terms of boosting local and fresh content, we also use the following measurement metrics:


where local pins denote pins whose linked country matches the user's country, and fresh pins denote pins no older than 30 days.
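These two ratios can be computed directly over a result list; the pin schema below is illustrative:

```python
from datetime import datetime, timedelta

def localness_and_freshness(results, user_country, now=None):
    """Return (local ratio, fresh ratio) for a list of result pins.
    A pin is local if its linked country matches the user's country,
    and fresh if it is no older than 30 days."""
    now = now or datetime.utcnow()
    local = sum(1 for p in results if p["country"] == user_country)
    fresh = sum(1 for p in results
                if now - p["created_at"] <= timedelta(days=30))
    n = len(results)
    return local / n, fresh / n
```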

5.3. Performance Results

5.3.1. Lightweight Ranking Comparison

Figure 7. Relative performance of the RankSVM model to the baseline rule-based method in the lightweight ranking stage: (a) offline performance, (b) online performance.
Latency      Rule-based   RankSVM
< 50 ms      5%           8%
50–200 ms    43%          61%
> 200 ms     52%          31%
Table 2. Latency Improvement of RankSVM Lightweight Ranking

The relative performance of the RankSVM model to our early rule-based ranking model in the lightweight ranking stage is summarized in Figure 7. On the offline test data set, the RankSVM model obtained consistent improvements over the rule-based ranking model. However, when moving to the online A/B test experiment, the improvement is smaller. This phenomenon is very consistent across all of our ranking experiments: it is much easier to tune a model that beats the baseline offline than online.

Although the quality improvement is relatively subtle, we greatly reduced search latency when migrating from the rule-based ranking to the RankSVM model. With the RankSVM model in the lightweight stage, we have higher confidence in filtering out negative pins before passing the candidates into the full ranking stage, which subsequently improves latency. As shown in Table 2, the percentage of searches with latency below 50 ms increased from 5% to 8%, while the percentage with latency above 200 ms dropped from 52% to 31%.
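As a sketch of this filtering step (the feature names, weights and cutoff are illustrative, not production values), scoring candidates with a cheap linear model and keeping only the top ones before full ranking might look like:

```python
def lightweight_filter(candidates, weights, keep_top=1000):
    """Score candidates with a cheap linear (RankSVM-style) model and keep
    only the top-scoring ones for the expensive full-ranking stage.
    Each candidate is a dict with a 'features' mapping of name -> value."""
    def score(features):
        return sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    ranked = sorted(candidates, key=lambda c: score(c["features"]),
                    reverse=True)
    return ranked[:keep_top]
```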

The results reported in Figure 7 and Table 2 illustrate how we balance search latency and search quality with the lightweight ranking model. The RankSVM model for the lightweight stage was initially launched and has been serving all US traffic since April 2017.

5.3.2. Full Ranking Comparison

In the full ranking stage, we conducted detailed offline experiments to compare the performance of different models. As shown in Figure 8(a), for the engagement-based quality, overall CNN ≻ GBRT ≻ DNN ≻ RankNet ≻ GBDT, where A ≻ B denotes that A performs significantly better than B. In terms of relevance-based quality, CNN ≻ {GBRT, DNN, RankNet, GBDT}.

Although the neural ranking models perform very well offline, our current online serving platform for neural ranking models incurs additional latency. That latency might be negligible for recommendation-based products, but it degrades the searcher experience in terms of increased waiting time. Therefore, we compute the ranking scores of the DNN and CNN models offline and feed them as two additional features into the online tree models. The results of the online experiment are presented in Figure 8(b). Based on the significant improvement of GBRT over the old linear RankSVM model, we launched the GBRT model in production in October 2017 and will soon launch the hybrid models to serve the entire search traffic.
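A minimal sketch of this feature-stacking step, assuming hypothetical instance and score-table schemas keyed by (query, pin):

```python
def augment_with_neural_scores(instances, dnn_scores, cnn_scores):
    """Append precomputed (offline) DNN and CNN ranking scores as two
    extra features for the online tree model. Scores are looked up by
    (query, pin_id); missing pairs default to 0.0."""
    augmented = []
    for inst in instances:
        key = (inst["query"], inst["pin_id"])
        features = dict(inst["features"])  # copy; do not mutate the input
        features["dnn_score"] = dnn_scores.get(key, 0.0)
        features["cnn_score"] = cnn_scores.get(key, 0.0)
        augmented.append({**inst, "features": features})
    return augmented
```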

Figure 8. Relative performance of different models to the baseline RankSVM method in the full ranking stage: (a) offline performance, (b) online performance.

5.3.3. Re-ranking Comparison

Note that the main purpose of re-ranking is to improve the freshness and localness of results. In the early days, our re-ranking applied very simple hand-tuned rule-based ranking functions. For example, assuming that users prefer to see more fresh content, we simply gave any pin younger than 30 days a boost, or enforced that at least a certain percentage of returned results be fresh.
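A hand-tuned freshness boost of this kind can be sketched as follows (the pin schema and boost value are illustrative):

```python
from datetime import datetime, timedelta

def rule_based_freshness_boost(results, now=None, boost=1.2):
    """Hand-tuned re-ranking sketch: multiply the score of any pin younger
    than 30 days by a fixed boost, then re-sort by the boosted score."""
    now = now or datetime.utcnow()
    for pin in results:
        if now - pin["created_at"] <= timedelta(days=30):
            pin["score"] *= boost
    return sorted(results, key=lambda p: p["score"], reverse=True)
```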

We spent much effort on feature engineering to migrate the rule-based re-ranking to machine-learned ranking. After multiple iterations of experiments, as shown in Figure 9, we obtained query-level and user-level performance comparable to the rule-based methods while significantly outperforming them on the freshness and localness metrics. The click-through rate and repin rate on fresh pins increased by 20% when we replaced the rule-based re-ranker with the GBRT model.

Figure 9. Relative performance of different models to the baseline rule-based method in the re-ranking stage: (a) offline performance, (b) online performance.

6. Related Works

Over the past decades, various ranking methods (Burges et al., 2005; Burges, 2010; Cao et al., 2007; Chapelle and Chang, 2011; Dehghani et al., 2017; Joachims, 2002; Geng et al., 2007; Yin et al., 2016; Liu et al., 2009; Zheng et al., 2008) have been proposed to improve the search relevance of web pages and user engagement in traditional and e-commerce search engines. While we refer readers to several tutorials (Burges, 2010; Liu et al., 2009) for a more detailed introduction to learning to rank, here we focus on how industrial applications of learning to rank for image search have evolved over time.

Prasad et al. (Prasad et al., 1987) developed the first microcomputer-based image database retrieval system. After the successful launch of the Google Image Search product in 2001, various image retrieval systems were deployed for public use. Earlier works on image retrieval systems (Datta et al., 2008) focused on candidate retrieval with image indexing techniques.

In recent years, many works have been proposed to improve the ranking of image search results using visual and personalized features. For instance, Jing et al. (Jing and Baluja, 2008) proposed the VisualRank algorithm, which ranks Google image search results by their centrality in a visual similarity graph. Separately, how to leverage user feedback and personalized signals for image ranking has been studied on the Yahoo (O’Hare et al., 2016), Flickr (Fan et al., 2009) and Pinterest (Lo et al., 2016) image corpora. In parallel to these industry applications, Bayesian personalized ranking (Rendle et al., 2009) has been studied to improve image search from implicit user feedback.

In addition to general image search products, many recent applications have focused on specific domains such as fashion and home decoration. This trend has also motivated researchers to focus on domain-specific image retrieval systems (He and McAuley, 2016; Aizawa and Ogawa, 2015; Liu et al., 2016). At Pinterest, while we have focused on four verticals (fashion, food, beauty and home decoration), we also aim to help people discover the things they love in any domain.

7. Conclusion and Future Works

We introduced how we leverage user feedback in both training data and featurization to improve the cascading core ranking of the Pinterest Image Search Engine. We empirically and theoretically analyzed various ranking models to understand how each performs in our image search engine. We hope the practical lessons learned from our ranking module design and deployment can also benefit other image search engines.

In the future, we plan to focus on two directions. First, as we have already observed good performance from both the DNN and CNN ranking models, we plan to launch and serve them online directly instead of feeding their predicted scores as new features into the tree-based ranking models. Second, many of our embedding-based features, such as word, visual and user embeddings, were trained and shared across all Pinterest products such as home feed recommendation, advertisement and shopping. We plan to train search-specific embedding features to understand the “intents” in the search scenario.

Acknowledgement. Many thanks to the entire search feature, search quality and search infra engineering teams, especially to Chao Tan, Randall Keller, Wenchang Hu, Matthew Fong, Laksh Bhasin, Ying Huang, Zheng Liu, Charlie Luo, Zhongxian Cheng, Xiaofang Cheng, Xin Liu and Yunsong Guo, for their efforts and help in launching the ranking pipeline. Thanks to Jure Leskovec for valuable discussions.


  • Aizawa and Ogawa (2015) Kiyoharu Aizawa and Makoto Ogawa. 2015. Foodlog: Multimedia tool for healthcare applications. IEEE MultiMedia 22, 2 (2015), 4–8.
  • Blei et al. (2003) David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research 3, Jan (2003), 993–1022.
  • Burges et al. (2005) Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005. Learning to Rank Using Gradient Descent. In Proceedings of the International Conference on Machine Learning. 89–96.
  • Burges (2010) Christopher JC Burges. 2010. From ranknet to lambdarank to lambdamart: An overview. Learning 11, 23-581 (2010).
  • Cao et al. (2007) Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th international conference on Machine learning. ACM, 129–136.
  • Chapelle and Chang (2011) Olivier Chapelle and Yi Chang. 2011. Yahoo! learning to rank challenge overview. (2011), 1–24.
  • Datta et al. (2008) Ritendra Datta, Dhiraj Joshi, Jia Li, and James Z Wang. 2008. Image retrieval: Ideas, influences, and trends of the new age. Comput. Surveys 40, 2 (2008), 5.
  • Dehghani et al. (2017) Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W Bruce Croft. 2017. Neural Ranking Models with Weak Supervision. arXiv preprint arXiv:1704.08803 (2017).
  • Dietterich (2000) Thomas G Dietterich. 2000. Ensemble methods in machine learning. In International workshop on multiple classifier systems. Springer, 1–15.
  • Fan et al. (2009) Jianping Fan, Daniel A Keim, Yuli Gao, Hangzai Luo, and Zongmin Li. 2009. JustClick: Personalized image recommendation via exploratory search from large-scale Flickr images. IEEE Transactions on Circuits and Systems for Video Technology 19, 2 (2009), 273–288.
  • Friedman (2001) Jerome H Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of statistics (2001), 1189–1232.
  • Geng et al. (2007) Xiubo Geng, Tie-Yan Liu, Tao Qin, and Hang Li. 2007. Feature selection for ranking. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 407–414.
  • He and McAuley (2016) Ruining He and Julian McAuley. 2016. VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. 144–150.
  • Järvelin and Kekäläinen (2002) Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated Gain-based Evaluation of IR Techniques. ACM Trans. Inf. Syst. 20, 4 (2002), 422–446.
  • Jing and Baluja (2008) Yushi Jing and Shumeet Baluja. 2008. VisualRank: Applying PageRank to Large-Scale Image Search. IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (2008), 1877–1890.
  • Jing et al. (2015) Yushi Jing, David Liu, Dmitry Kislyuk, Andrew Zhai, Jiajing Xu, Jeff Donahue, and Sarah Tavel. 2015. Visual search at pinterest. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1889–1898.
  • Joachims (2002) Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 133–142.
  • Li et al. (2008) Ping Li, Qiang Wu, and Christopher J Burges. 2008. Mcrank: Learning to rank using multiple classification and gradient boosting. In Advances in neural information processing systems. 897–904.
  • Liu et al. (2017a) David C Liu, Stephanie Rogers, Raymond Shiau, Dmitry Kislyuk, Kevin C Ma, Zhigang Zhong, Jenny Liu, and Yushi Jing. 2017a. Related Pins at Pinterest: The Evolution of a Real-World Recommender System. In Proceedings of the 26th International Conference on World Wide Web Companion. 583–592.
  • Liu et al. (2017b) Shichen Liu, Fei Xiao, Wenwu Ou, and Luo Si. 2017b. Cascade Ranking for Operational E-commerce Search. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York, NY, USA, 1557–1565.
  • Liu et al. (2009) Tie-Yan Liu et al. 2009. Learning to rank for information retrieval. Foundations and Trends® in Information Retrieval 3, 3 (2009), 225–331.
  • Liu et al. (2016) Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. 2016. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1096–1104.
  • Lo et al. (2016) Caroline Lo, Dan Frankowski, and Jure Leskovec. 2016. Understanding behaviors that lead to purchasing: A case study of pinterest. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 531–540.
  • Mao et al. (2016) Junhua Mao, Jiajing Xu, Kevin Jing, and Alan L Yuille. 2016. Training and evaluating multimodal word embeddings with large-scale web annotated images. In Advances in Neural Information Processing Systems. 442–450.
  • Matveeva et al. (2006) Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. 2006. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 437–444.
  • McCandless et al. (2010) Michael McCandless, Erik Hatcher, and Otis Gospodnetic. 2010. Lucene in Action, Second Edition: Covers Apache Lucene 3.0. Manning Publications Co., Greenwich, CT, USA.
  • O’Hare et al. (2016) Neil O’Hare, Paloma de Juan, Rossano Schifanella, Yunlong He, Dawei Yin, and Yi Chang. 2016. Leveraging User Interaction Signals for Web Image Search. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval. 559–568.
  • Prasad et al. (1987) BE Prasad, Amar Gupta, Hoo-Min D Toong, and Stuart E Madnick. 1987. A microcomputer-based image database management system. IEEE Transactions on Industrial Electronics 1 (1987), 83–88.
  • Raykar et al. (2010) Vikas C Raykar, Balaji Krishnapuram, and Shipeng Yu. 2010. Designing efficient cascaded classifiers: tradeoff between accuracy and cost. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 853–860.
  • Rendle et al. (2009) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. 452–461.
  • Robertson and Zaragoza (2009) Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Found. Trends Inf. Retr. 3, 4 (2009).
  • Smiley and Pugh (2011) D. Smiley and D.E. Pugh. 2011. Apache Solr 3 Enterprise Search Server. Packt Publishing, Limited.
  • Song et al. (2008) Ruihua Song, Michael J Taylor, Ji-Rong Wen, Hsiao-Wuen Hon, and Yong Yu. 2008. Viewing term proximity from a different perspective. In European Conference on Information Retrieval. Springer, 346–357.
  • Yin et al. (2016) Dawei Yin, Yuening Hu, Jiliang Tang, Tim Daly, Mianwei Zhou, Hua Ouyang, Jianhui Chen, Changsung Kang, Hongbo Deng, Chikashi Nobata, et al. 2016. Ranking relevance in yahoo search. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 323–332.
  • Zhang (2004) Tong Zhang. 2004. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of the twenty-first international conference on Machine learning. ACM, 116.
  • Zheng et al. (2007) Zhaohui Zheng, Keke Chen, Gordon Sun, and Hongyuan Zha. 2007. A regression framework for learning ranking functions using relative relevance judgments. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 287–294.
  • Zheng et al. (2008) Zhaohui Zheng, Hongyuan Zha, Tong Zhang, Olivier Chapelle, Keke Chen, and Gordon Sun. 2008. A general boosting method and its application to learning ranking functions for web search. In Advances in neural information processing systems. 1697–1704.