Gender and Interest Targeting for Sponsored Post Advertising at Tumblr

by Mihajlo Grbovic, et al.

As one of the leading platforms for creative content, Tumblr offers advertisers a unique way of creating brand identity. Advertisers can tell their story through images, animation, text, music, video, and more, and promote that content by sponsoring it to appear as an advertisement in the streams of Tumblr users. In this paper we present a framework that enabled one of the key targeted advertising components for Tumblr, specifically gender and interest targeting. We describe the main challenges involved in development of the framework, which include creating the ground truth for training gender prediction models, as well as mapping Tumblr content to an interest taxonomy. For purposes of inferring user interests we propose a novel semi-supervised neural language model for categorization of Tumblr content (i.e., post tags and post keywords). The model was trained on a large-scale data set consisting of 6.8 billion user posts, with a very limited amount of categorized keywords, and was shown to have superior performance over the bag-of-words model. We successfully deployed gender and interest targeting capability in Yahoo production systems, delivering inference for users that covers more than 90% of daily activities at Tumblr. Online performance results indicate advantages of the proposed approach, where we observed a 20% lift in user engagement with sponsored posts as compared to untargeted campaigns.




1 Introduction

In recent years online social networks have evolved to become an important part of life for online users of all demographic and socio-economic backgrounds. They allow users to easily stay in touch with their friends and family, discuss everyday events, or share their interests with other users with a click of a button. Tumblr is one such social network, representing one of the most popular and fastest growing networks on the web. Hundreds of millions of people around the world come every month to Tumblr to find, follow, and share what they love. The Tumblr network is a gold mine of content, comprising millions of blogs on different topics such as travel, sports, and music, where millions of user posts are published on a daily basis. This wealth of user-generated data opens a great opportunity for advertisers, allowing them to promote their products through high-quality targeting campaigns to both blog visitors and blog owners [17].

The standard, prevalent form of advertising on Tumblr is through sponsored posts that appear alongside regular posts in the user’s dashboard, a central page for a Tumblr user, displaying the newest posts of followed blogs in the form of a stream. This form of advertising, in which advertisements camouflage and co-exist with native content in the stream, is often referred to as native advertising. Native advertisements are usually aesthetically beautiful and highly engaging, which typically makes them more enjoyable than regular display ads [4]. Tumblr launched its native advertising product in May of 2012. Since then, the number of advertisers (or brands) on the platform has grown steadily, reaching a milestone number of advertisers in April 2013. Advertising companies on Tumblr include a number of major brands, such as Coca Cola, Converse, and Nestle. Moreover, many of the world's most valuable brands advertise on the platform, and sponsored posts have generated billions of paid ad impressions since the launch of Tumblr advertising.

However, the huge marketing potential of Tumblr [17] has not been fully exploited: targeting specific interest and demographic audiences has become an industry standard that many advertisers require, yet it was a component Tumblr was missing.

Figure 1: Examples of blog title (first line, bigger font) and blog description (bottom line, smaller font)

Building interest targeting products on social and microblogging platforms is an important research topic, discussed previously by several researchers [11]. However, due to its distinct characteristics, Tumblr poses novel challenges encountered during the development phase, which we explain in detail in this paper. In particular, content and language used at Tumblr have distinct characteristics that needed to be accounted for during modeling (e.g., tag “hp" and word “hp" have different meanings when they appear in a post, referring to “Harry Potter" and “the HP company", respectively). Moreover, unlike the popular social platform Facebook, which contains a large amount of social interactions but a limited amount of content, or the microblogging platform Twitter, which contains an intermediate amount of both, Tumblr represents a unique combination of a rich and diverse content platform and a dynamic social network. To make use of this vast advertising potential, we propose to classify user-generated Tumblr content into a standard multi-level general-interest taxonomy that advertisers commonly use for defining their targeting campaigns, opening doors to high-quality audience segmentation and modeling for purposes of ad targeting. However, inferring categories of users’ posts is a challenging task, given the huge quantities of unlabeled data being posted every day and the very limited amount of labeled data, typically obtained by human editorial efforts. To this end, we propose a novel semi-supervised neural language algorithm, capable of jointly learning embeddings of post keywords, post tags, and category representations in the same feature space. The neural model was trained on a large-scale data set comprising 6.8 billion posts, with only a fraction of the content categorized.

Targeting pipelines described in this paper are being used to show ads to millions of users daily, and have substantially improved Tumblr’s business metrics following the launch. On our path to developing targeting capabilities for Tumblr, we first created user profiles, based on users’ Tumblr activities that include publishing blog posts, following other blogs, liking posts, and others. Then, we built and delivered both demographic and interest predictive models based on the created profiles.

We note that the privacy of our users is of critical importance. Therefore, we were constrained in what data we could use. Specifically, user profiles were created solely from data which users share publicly with others, including contents of blog posts, blog title and description, and follow, like, and reblog actions. This data is publicly available through the Tumblr Firehose data feed. Other user activities, such as users' searches on Tumblr, which blogs they visited, and where they clicked, are considered to be sensitive data and were not used in any way for the development of ad targeting models.

2 Related Work

Personalization is defined as "the ability to proactively tailor products and product purchasing experiences to tastes of individual consumers based upon their personal and preference information" [7], and it has become an important topic in recent years. Personalizing online content for individual users may lead to improved user experience and directly translate into financial gains for online businesses [15]. In addition, personalization fosters a stronger bond between customers and companies, and can help in increasing user loyalty and retention [2]. For these reasons it has been recognized as a strategic goal and is a focus of significant research efforts of major internet companies [8, 12].

We consider personalization through the domain of ad targeting [9], where the task is to find the best matching ads to be displayed for each individual user. This improves the user's online experience (as only relevant and interesting ads are shown) and can lead to increased revenue for the advertisers (as users are more likely to click on the ad and make a purchase). Due to its large impact and many open research questions, targeted advertising has garnered significant interest from the machine learning community, as witnessed by a large number of recent workshops and publications [5, 11].

One of the basic approaches in ad targeting is to target users with ads based on their demographics, such as age or gender. Historically, this approach has proven to work better than targeting random users. However, while for some products this type of targeting may be sufficient (e.g., women’s makeup, women’s clothing, men’s razors, men’s clothing), for others it is not effective enough and a more involved profiling of users is required. A popular method in today’s ad targeting that addresses this issue is known as interest targeting, where ads are assigned categories, such as “sports” or “travel”, and machine learning models are trained to predict user interest in each of these categories using historical user behavior [1, 14, 18]. Typically, a taxonomy is used to decide on the targeting categories, and for each ad category a separate predictive model is trained, able to estimate the probability of an ad click. Then, the models are evaluated on the entire user population, with the users with the highest scores selected for ad exposure. In this work we take this approach to develop an ad targeting platform at Tumblr.

To the best of our knowledge, the Tumblr social network has been considered by only a few scientific studies. In [3, 16] the authors discuss the problem of blog recommendation, while the authors of [6] explore Tumblr social norms. However, our work is the first paper that addresses ad targeting at Tumblr.

3 What is Tumblr?

Tumblr is one of the most popular blogging services available online, where users can create and share posts with the followers of their blogs. According to data from January, Tumblr hosts hundreds of millions of blogs that have jointly produced billions of blog posts. With a large number of users signing up every day, Tumblr is currently among the fastest growing social platforms.

3.1 User activities on Tumblr

To register for a Tumblr account, a valid e-mail address is required, along with a primary username (which will become a part of the blog URL) and a confirmation of age. A Tumblr blog resembles a webpage, with a profile picture, blog title, and blog description appearing on the top (see Figure 1), followed by a stream of blog posts below. The first blog created by a registered user is considered his/her primary blog. In addition, a very small portion of users maintains one or more secondary blogs. A Tumblr user is uniquely described by the blog ID of a primary blog, and throughout the paper we will use “blog" and “user” interchangeably.

Figure 2: Distribution of Tumblr post types

Common activities of users on Tumblr include the following: 1) creating a post on one’s blog; 2) sharing a post created by another blog, called reblogging (a reblogged post will appear on the user’s blog); 3) liking a post by another blog; and 4) following another blog. Similar to Twitter, the follow connections at Tumblr are uni-directional. However, unlike Twitter, users can create longer, richer, and higher quality content in the form of several post types such as text, photo, quote, link, chat, audio, and video. The most popular types of blog posts are photo posts and text posts, and, based on the analysis published in [19], together they cover the vast majority of all posts on Tumblr (see Figure 2). Any post type can be annotated with words starting with "#" (called tags) that concisely describe the post and allow for easier browsing and searching. Additional metadata that describes a post includes photo captions in photo posts, post titles in text posts, and artist names in audio posts. An example post is shown in Figure 3. Tags, such as #gadgets or #tech, are displayed below the photo caption, while the buttons for reblog and like actions are located in the bottom right corner. Lastly, each user has a dashboard (i.e., a feed of blog posts published by followed users, ordered in time), with more recent posts appearing at the top.

Figure 3: Example of Tumblr blog post
Figure 4: Example of Tumblr Sponsored post

3.2 Advertising at Tumblr

Advertising at Tumblr is implemented through the mechanism of sponsored (or promoted) posts shown in a user’s dashboard. This is similar to how advertising works on Twitter and Facebook. A sponsored post can be a video, an image, or simply a textual post containing an advertising message. In Figure 4 we show an example of a sponsored post and how it appears on web and mobile dashboards. Similarly to organic (or non-promoted) posts, sponsored posts can propagate from user to user in the network by means of reblogs, and users can also “like" the promoted post. Both likes and reblogs can be seen as an explicit form of acceptance or endorsement of the advertising message. Moreover, just like any other post, sponsored posts are supplemented with notes on who liked and reblogged the post.

Interestingly, sponsored posts are on average reblogged many more times than user-generated, organic posts. We have observed that a large share of engagements with sponsored posts are reblogs, likes, or follows. What is more, every fourth reblog of a sponsored post results in downstream reblogs from followers, leading to content longevity, and one third of reblogs of sponsored posts occur several days or more after the initial post.

4 Tumblr data

In this section we describe the data sources (user activities and post contents) utilized to create user profiles for ad targeting. In particular, user activities included actions such as posts, likes, follows, and reblogs, while post contents included tags, title, and body for text posts, artist names from audio posts, as well as tags and caption for photo posts.

4.1 Data sources

Once signed in to Tumblr, a user can follow other users’ blogs. The follow action is one-directional, as it does not require a follow back. For the purpose of this study, we collected a sub-graph containing millions of unique nodes (i.e., users) and billions of edges (i.e., follows), out of which millions are bi-directional (pairs of users that follow each other). The data set included several billion activities on Tumblr. As discussed earlier, the activity log is available through a data feed called Firehose.

To create user profiles for targeting, textual contents of all posts were collected, including photo captions, tags, title, and body. In addition, every time a user performs a post or reblog activity, Firehose lists the user’s blog title and blog description, which we also used to represent a user. As we can see in Figure 1, blog title and description often provide useful information with respect to targeting, such as the user’s first name, age, and even declared interests (e.g., statements such as “fashion addict” or “I love football”).

4.2 Keyword extraction

In order to improve the representation of user profiles, we propose to extract relevant keywords from available blog information. It is common that certain words appear together more often than others (e.g., words “credit” and “card"), and we aim to capture those bigrams and use them to represent users in addition to separate words. To detect bigrams we use a procedure that counts unigram and bigram appearances, and for each combination of words $w_i$ and $w_j$ calculates the following score,

$\mathrm{score}(w_i, w_j) = \frac{\mathrm{count}(w_i w_j) - \delta}{\mathrm{count}(w_i) \times \mathrm{count}(w_j)},$

where $\delta$ is a discounting coefficient that prevents forming bigrams out of infrequent words. Finally, in addition to unigrams, bigrams with score above a certain threshold were chosen to be treated as user keywords, and used to generate rich user profiles.
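The counting-and-thresholding procedure above can be sketched in a few lines. The function name and the values of the discount `delta` and the score `threshold` below are illustrative choices, not values from the paper:

```python
from collections import Counter

def detect_bigrams(posts, delta=5.0, threshold=1e-4):
    """Score adjacent word pairs and keep those whose discounted
    count-ratio score exceeds the threshold."""
    unigram = Counter()
    bigram = Counter()
    for words in posts:                      # each post is a list of tokens
        unigram.update(words)
        bigram.update(zip(words, words[1:]))

    keywords = set()
    for (w1, w2), c12 in bigram.items():
        # Discounted score: frequent co-occurrence relative to the
        # individual word frequencies.
        score = (c12 - delta) / (unigram[w1] * unigram[w2])
        if score > threshold:
            keywords.add(f"{w1}_{w2}")
    return keywords

# Toy corpus: "credit card" co-occurs often and should surface as a bigram.
posts = [["credit", "card", "debt"], ["credit", "card", "offer"],
         ["card", "game"], ["credit", "score"]] * 10
print(detect_bigrams(posts))
```

In practice the thresholds would be tuned so that only genuinely collocated pairs (rather than every observed pair) survive.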

4.3 Keyword-based user profiles

The available data sources were used to extract user profiles. In particular, we extracted three distinct groups of user-related data: 1) declared; 2) content of posts; and 3) actions. The specific components included in each data group are listed in Table 1. From each group we extracted relevant keywords to represent users, as described below.

Declared            Content              Actions
blog title          post tags            reblog
blog description    photo captions       like
                    text post title      follow
                    text post body
                    audio post artists
Table 1: User data extracted from Tumblr Firehose

Declared data consist of information which the user provided during sign-up, including keywords from blog title and blog description, where keywords were extracted using the method in Section 4.2. To create user profiles we kept the most frequent keywords from blog titles and descriptions, after removing stopwords such as “a”, “the”, “where”, “in”. We counted the keyword frequencies in the user’s blog title and description, and stored the count and the time stamp as part of the user profile.

Content features were formed from the textual contents of posts which the user either created or reblogged. The main content feature types included: 1) post tags; 2) keywords from post title and body; 3) keywords from photo post captions; and 4) artist names from audio posts. Tags in posts were not tokenized; instead, they were used in the form they appear, e.g., tag “food for a vegan” was one keyword. On the other hand, to extract keywords from text appearing in text post content we again used the method from Section 4.2. We kept only the most frequently occurring keywords, excluding stopwords. In addition, we used the most popular artist names as keywords. In this way, we collected several million distinct keywords that were used to obtain a rich representation of user profiles. To illustrate content keyword extraction from our dataset, consider a user who used tag #hp five times and tag #nba eight times, used keyword football two times in post titles, and posted an audio post with a song by artist Shakira ten times; the resulting user profile would contain count features such as {tag:hp=5, tag:nba=8, title:football=2, artist:shakira=10}.

Action features include follows, likes, and reblogs. If a user follows another user, we create an indicator feature for the followed blog and add it to the follower's user profile. Similarly, if a user likes or reblogs another user's posts, we create count features that keep record of the number of likes and reblogs of that blog's posts, and update the user profile accordingly. Furthermore, in order to help enrich profiles for users who do not post content but only follow and like other posts, we identified frequent bloggers in each interest category. To identify frequent bloggers for a category, we ranked users by the total number of categorized activities and retained the users with the highest activity counts. Then, for each interest category from the taxonomy we created an additional feature in the user representation vector that counts how many frequent bloggers from that category a user is following.

The described approach resulted in user profiles for millions of users, with millions of unique keywords in total; the average user profile was sparse, with a limited number of non-zero features. Most of the keywords described above are represented as either binary indicators or counts of occurrences. To handle large counts, we normalize the numerical data at the user level by applying a log transformation: assuming that the count is c, we replace it with the value log(1 + c).
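The log transformation of count features can be sketched as follows; the helper name and the sample profile are hypothetical:

```python
import math

def normalize_profile(profile):
    """Replace each raw occurrence count c with log(1 + c), damping the
    influence of very heavy posters on downstream models."""
    return {feature: math.log1p(count) for feature, count in profile.items()}

# Hypothetical keyword-based profile with raw counts.
profile = {"tag:hp": 5, "tag:nba": 8, "artist:shakira": 10}
normalized = normalize_profile(profile)
```

Using log1p keeps zero counts at zero while compressing large counts into a much narrower numeric range.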

5 Interest Prediction

The goal of our work is to infer user demographics (described in the following section) and identify user groups with interest in certain topics, such as music, travel, cooking, or books, to allow advertisers to target segmented Tumblr audiences. As the topics may be defined at various levels of granularity, to avoid sparsity problems while still providing useful and actionable interest categories, user interests are often classified into a pre-determined hierarchical interest taxonomy that advertisers commonly use. However, to be able to create effective user interest classifiers one requires a sufficient amount of labeled data. Yet, at the scale of the Tumblr interest prediction problem, labeling can be a daunting task for human editors. For that reason we propose a novel semi-supervised classification approach, based on the recently proposed word2vec model [13], that efficiently and seamlessly makes use of large amounts of unlabeled and a limited amount of editorially labeled data for learning effective interest classifiers.

Figure 5: Unsupervised skip-gram model
Figure 6: Semi-supervised skip-gram model

5.1 User interest taxonomy

We decided to classify keywords into the General Interest Taxonomy (GIT), used by the Yahoo Gemini advertising platform for native advertising. The GIT is carefully derived based on Interactive Advertising Bureau (IAB) taxonomy recommendations in order to meet advertiser needs and protect Yahoo’s interests. The GIT has a two-level hierarchical structure such that advertisers can adjust the audience reach by utilizing broader or narrower interest categories. The top level of the taxonomy contains broad nodes (e.g., “Automotive", “Business", “Pets", “Travel"), while the second level contains nodes which represent more precise interests (e.g., “Automotive/SUV", “Automotive/Luxury", “Pets/Dogs").

5.2 Proposed semi-supervised classification

In this section we present a novel classification approach based on the recently proposed skip-gram model [13], used to categorize keywords into the GIT taxonomy. For conciseness, we describe the proposed model assuming that it is applied to tag categorization. However, we used the same methodology for categorization of keywords originating from blog title, description, and text, audio, and image posts.

We consider the task of tag classification, where the goal is to classify tags into a pre-defined taxonomy of interest categories. In order to address this problem, we propose to learn tag representations in a low-dimensional space using neural language models applied to historical Tumblr posts. Let us assume we are given $N$ posts. In the post logs found in Firehose, every post is recorded along with its tags. We collected data in the form $\mathcal{D} = \{(t_{n1}, \ldots, t_{nT_n})\}_{n=1}^{N}$, where $T_n$ represents the number of tags in the $n$-th post. Given data set $\mathcal{D}$, the objective is to find a representation of tags such that semantically similar tags are nearby in the representation space. For this purpose we extend ideas originating from recently proposed language models, as described in the remainder of the section.

The skip-gram (SG) model involves learning representations of tags in a low-dimensional space from post logs in an unsupervised fashion, by using the notion of a blog post as a “sentence” and the tags within the post as “words”, borrowing the terminology from the Natural Language Processing (NLP) domain (see Figure 5). Tag representations using the skip-gram model [13] are learned by maximizing the objective function over the entire set $\mathcal{P}$ of blog posts, defined as

$\mathcal{L} = \sum_{p \in \mathcal{P}} \sum_{t_i \in p} \sum_{-c \leq j \leq c, j \neq 0} \log \mathbb{P}(t_{i+j} \mid t_i).$

The probability $\mathbb{P}(t_{i+j} \mid t_i)$ of observing a neighboring tag $t_{i+j}$ given the current tag $t_i$ is defined using the soft-max,

$\mathbb{P}(t_{i+j} \mid t_i) = \frac{\exp(\mathbf{v}_{t_i}^{\top} \mathbf{v}'_{t_{i+j}})}{\sum_{t=1}^{V} \exp(\mathbf{v}_{t_i}^{\top} \mathbf{v}'_{t})},$

where $\mathbf{v}_t$ and $\mathbf{v}'_t$ are the input and output vector representations of tag $t$ of user-specified dimensionality $d$, $c$ defines the length of the context for tag sequences, and $V$ is the number of unique tags in the vocabulary. From this equation we see that tags that often co-occur and tags with similar contexts (i.e., with similar neighboring tags) will have similar vector representations as learned by the word2vec model.
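The soft-max over the tag vocabulary can be sketched as follows; the random embedding matrices and the sizes below are toy stand-ins for learned parameters:

```python
import numpy as np

# Toy tag vocabulary with random input/output embeddings (illustrative sizes).
rng = np.random.default_rng(0)
V, d = 1000, 50                              # vocabulary size, embedding dim
v_in = rng.normal(scale=0.1, size=(V, d))    # input tag vectors
v_out = rng.normal(scale=0.1, size=(V, d))   # output (context) tag vectors

def context_probs(center):
    """Soft-max distribution over candidate context tags given a center tag."""
    logits = v_out @ v_in[center]            # score for every candidate tag
    logits -= logits.max()                   # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()

p = context_probs(3)                         # valid distribution over V tags
```

The denominator sums over the entire vocabulary, which is exactly the cost that motivates the negative-sampling shortcut discussed later in the paper.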

The semi-supervised skip-gram (SS-SG) model assumes that some tags are labeled with categories from the GIT taxonomy. Then, we introduce a dummy category vector for each node of the taxonomy, and leverage tag contexts in blog posts to jointly learn tag vectors and category vectors in the same feature space. Given such a setup, after learning the representations, every tag from the vocabulary can be categorized by simply looking up the closest category vector in the joint embedding space.

Tag Category
music Arts and Entertainment/Music
fashion Style and Fashion
song Arts and Entertainment/Music
art Arts and Entertainment
disney Arts and Entertainment/Movies
style Style and Fashion
photography Hobbies/Photography
teen wolf Arts and Entertainment/TV
food Food and Drink
Table 2: Examples of categorized tags
Figure 7: Nearest neighbors of tags: a) #makeup; b) #dress

Specifically, given the labeled tags, we extend the original post data set to obtain an augmented data set in which category labels are imputed into post “sentences” where available. In particular, labeled tags are accompanied by their assigned categories, and every time a vector of a labeled central tag $t_i$ is updated to predict the surrounding tags, the vectors of categories assigned to $t_i$ are updated as well. More formally, assuming central tag $t_i$ is labeled with categories $c_{i1}, \ldots, c_{iK}$, $K$ of them in total, the semi-supervised skip-gram model learns tag and category representations by maximizing the following objective function,

$\mathcal{L}_s = \sum_{p \in \mathcal{P}} \sum_{t_i \in p} \sum_{-c \leq j \leq c, j \neq 0} \Big( \log \mathbb{P}(t_{i+j} \mid t_i) + \sum_{k=1}^{K} \log \mathbb{P}(t_{i+j} \mid c_{ik}) \Big).$

The probability $\mathbb{P}(t_{i+j} \mid c_{ik})$ of observing tag $t_{i+j}$ given a label $c_{ik}$ of the current tag is defined using the soft-max,

$\mathbb{P}(t_{i+j} \mid c_{ik}) = \frac{\exp(\mathbf{v}_{c_{ik}}^{\top} \mathbf{v}'_{t_{i+j}})}{\sum_{t=1}^{V} \exp(\mathbf{v}_{c_{ik}}^{\top} \mathbf{v}'_{t})}.$
This procedure allows us to seamlessly incorporate labeled and unlabeled data, and to learn tag and category vectors in a joint embedding space. Then, classification of tags amounts to a simple nearest-neighbor search among the category vectors. In Figure 6 we show the graphical representation of the semi-supervised skip-gram model.
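The nearest-category lookup can be sketched as follows. The function name, the similarity threshold, and the toy two-dimensional vectors are illustrative, with cosine similarity standing in for the paper's cosine-distance criterion:

```python
import numpy as np

def classify_tags(tag_vecs, tag_names, cat_vecs, cat_names, min_sim=0.9):
    """Assign each tag to the nearest category vector by cosine similarity,
    leaving tags uncategorized when the best match falls below min_sim."""
    t = tag_vecs / np.linalg.norm(tag_vecs, axis=1, keepdims=True)
    c = cat_vecs / np.linalg.norm(cat_vecs, axis=1, keepdims=True)
    sims = t @ c.T                                   # pairwise cosine similarity
    best = sims.argmax(axis=1)                       # closest category per tag
    return {name: cat_names[b]
            for name, b, row in zip(tag_names, best, sims)
            if row[b] >= min_sim}

# Toy 2-d "embeddings": two tags sit close to a category, one is ambiguous.
tags = np.array([[0.9, 0.1], [0.1, 0.95], [0.7, 0.7]])
cats = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = classify_tags(tags, ["song", "dress", "ambiguous"], cats,
                       ["Music", "Fashion"])
```

Because tags and categories live in the same space, classification needs no separate model: it is a single matrix product followed by an argmax.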

5.2.1 Training

The models are optimized using stochastic gradient ascent, suitable for large-scale problems. However, computation of the gradients of the soft-max objective is proportional to the vocabulary size, which may be computationally expensive in practical tasks, as the vocabulary could easily reach several million tags. As an alternative, we used the negative sampling approach [13], which significantly reduces the computational complexity.
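A minimal sketch of one negative-sampling update, assuming the standard skip-gram-with-negative-sampling gradient from [13]; the indices, learning rate, and vector sizes are illustrative:

```python
import numpy as np

def sgns_step(v_in, v_out, center, context, negatives, lr=0.05):
    """One stochastic gradient step of skip-gram with negative sampling:
    the true (center, context) pair is pulled together, while randomly
    sampled negative tags are pushed away."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    h = v_in[center].copy()
    grad_h = np.zeros_like(h)
    for tag, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        g = sigmoid(v_out[tag] @ h) - label   # prediction error for this pair
        grad_h += g * v_out[tag]
        v_out[tag] -= lr * g * h              # update output vector
    v_in[center] -= lr * grad_h               # update input vector

# A few steps on random vectors should raise the score of the true pair.
rng = np.random.default_rng(1)
v_in = rng.normal(scale=0.1, size=(100, 16))
v_out = rng.normal(scale=0.1, size=(100, 16))
before = v_out[7] @ v_in[3]
for _ in range(20):
    sgns_step(v_in, v_out, center=3, context=7, negatives=[11, 42, 65])
```

Each update touches only the context vector and a handful of negatives, so the cost per step is independent of the vocabulary size.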

Figure 8: Nearest tags to “Food and Drink/Desserts"
Figure 9: Nearest tags to “Health and Fitness/Weight Loss"

The data set used during model training comprised 6.8 billion posts that contained tags. To collect category labels for some tags, we sorted the tags in decreasing order of popularity and the editors labeled the top ones with one or more categories. Following this process we obtained a set of categorized tags, which covered a large portion of the entire data set. We show examples of categorized tags in Table 2.

The models were trained using a machine with 24 cores and a large amount of RAM. The dimensionality of the embedding space and the size of the context neighborhood were set to fixed values, and a fixed number of negative samples was used in each vector update for negative sampling. Similarly to the approach in [13], the most frequent tags were sub-sampled during training.

5.2.2 Inference

When the vector representations of all tags are learned, we can find similar tags for a given tag by a straightforward K-nearest neighbor (K-NN) search in the low-dimensional representation space. We use cosine distance [13] as a measure of similarity. To illustrate the usefulness of our approach, examples of tags similar to #makeup and #dress are shown in Figure 7, where we see that semantically related tags are grouped in the same part of the embedding space.
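A K-NN search of this kind reduces to sorting cosine similarities; the tag names and two-dimensional vectors below are toy stand-ins for learned embeddings:

```python
import numpy as np

def nearest_tags(query, tag_names, tag_vecs, k=3):
    """Return the k tags closest to `query` under cosine similarity."""
    m = tag_vecs / np.linalg.norm(tag_vecs, axis=1, keepdims=True)
    q = m[tag_names.index(query)]
    order = np.argsort(-(m @ q))             # most similar first
    return [tag_names[i] for i in order if tag_names[i] != query][:k]

names = ["makeup", "lipstick", "eyeliner", "nba", "basketball"]
vecs = np.array([[1.0, 0.0], [0.95, 0.1], [0.9, 0.2],
                 [0.0, 1.0], [0.1, 0.9]])
print(nearest_tags("makeup", names, vecs))
```

For millions of tags an exact scan like this would typically be replaced by an approximate nearest-neighbor index, but the ranking logic is the same.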

Method Precision Recall
LR-SG 0.71 0.65
K-NN-SG 0.82 0.62
SS-SG 0.85 0.63
Table 3: Precision and recall of categorization methods

Similarly, we can find the most likely category for any tag by searching for the nearest neighbors in the subset of category vectors. To produce highly confident tags in each category, we calculated the cosine distance to each of the tags in the vocabulary and retrieved the tags with cosine distance above a fixed threshold, found by editorial evaluation of the results. In total, a large number of tags were confidently categorized into one or more categories. We show examples of categorized tags for categories “Food and Drink/Desserts and Baking" and “Health and Fitness/Weight Loss" in Figures 8 and 9, respectively.

Demonstration video of our tag categorization tool is available online at

User Inferred interest Original keyword-based profile
user 1 Arts and Entertainment/Movies tag:{spoilers:30, shrek:18, hercules:12, cinderella:3, hobbit:123, hulk:21, pokemon:7, thor:58, …
disney:500, tarzan:8, marvel:385, wolverine:21, twilight:2, pixar:87, godzilla:1, x-men:53, …
pocahontas:4, avengers:134}
txt:{aladdin:28, batman:10, bambi:12, movies:100}
desc:{oscar:1, animation:12, comedy:1, movie:1, dvd:1}
user 2 Style and Fashion tag:{ womensfashion:110, curls:6, fashiondiaries:133, redhair:2, menswear:125, chanel:4 …
springfashion:50, style:132, streetstyle:132, hairstylist:134, dapper:3, mensfashion:124}
user 3 Food and Drink tag:{food:11, dessert:4 , soup:1, brunch:1, fruit:2, chicken:3, smoothie:1, cake:2, breakfast:2, …
ginger:2, salad:5, avocado:1}
txt:{food:16, meals:6}
user 4 Home and Garden tag:{daisies:2, kitchen:20, chair:3, art:81, outdoor:20, chandelier:12, lamp:8, window:2, bath:1 …
floral:17, home:3, wildflowers:1, flowers:102, interior:201, tree:1, flower:49, table:1, stairs:2, …
bedroom:56, wood:2, bathroom:26}
txt:{garden:32, interior:17, home:41}
user 5 Automotive/Motorcycles tag:{cars:24, ride:9, vehicle:22, riding:8, road:18}
txt:{bike:8, motorcycle:10, riding:5, ride:9, road:10, vehicle:18, bikes:6, bicycle:2, scooter:1}
Table 4: Examples of interest inference based on enriched user profiles

5.2.3 Evaluation

To quantify the benefits of our approach, we evaluated the method by excluding a random subset of tags from the editorially labeled set and training the model using the remaining labeled tags. We compared the SS-SG classification to the state-of-the-art logistic regression (LR) and K-NN classifiers, both trained on the vectors learned by the original SG model. For LR classification (we refer to the method as LR-SG) we trained one classifier per interest category, while for K-NN (we refer to the method as K-NN-SG), for each test tag we found its nearest categorized neighbors and predicted the category that appeared most often among them. We report results following 5-fold cross-validation in Table 3. The results indicate that classification based on our approach achieves higher precision than the competing methods, while at the same time maintaining competitive recall.

tag "hp" neighbors      word "hp" neighbors
harry potter            hewlett packard
hp movies
hp books                hp computers
hp book quotes          hp company
harry potter facts      dell computers
hogwarts                hp printers
Table 5: Language differences in post tags and post text

5.2.4 Model extensions

To be able to map more of Tumblr content to the GIT taxonomy, we trained two more semi-supervised skip-gram models, for: 1) keywords from post title and body, and 2) keywords from blog title and description. To train the models, we followed a similar procedure as before: editors provided categorized keywords, used to form training data sets for SS-SG model learning. Post keyword vectors were trained using a data set comprising billions of posts, while blog title and description keyword vectors were trained using millions of blogs. These models were trained separately because of language differences among the three domains: post tags, post text, and blog title and description text. To justify this claim, in Table 5 we show the nearest neighbors of tag “hp” from the tag SS-SG model and of word “hp” from the post text SS-SG model. As we can see in the table, “hp” has two different meanings in the post tag and post text domains, referring to “Harry Potter" and “Hewlett-Packard", respectively.

To find the most confident keywords for each category, we calculated cosine similarities between the category vectors and all keyword vectors from the vocabulary. We retrieved post text keywords whose cosine similarity exceeded a fixed threshold. We repeated the same procedure for keywords from the blog title and description, resulting in a set of categorized keywords for each domain.
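The retrieval of high-confidence keywords by cosine similarity can be sketched as follows; the similarity threshold is an assumed placeholder, since the value used in production is not stated here:

```python
import numpy as np

def confident_keywords(category_vec, keyword_vecs, keywords, threshold=0.4):
    """Return keywords whose embedding is within a cosine-similarity
    threshold of the category vector (threshold is a placeholder)."""
    cat = category_vec / np.linalg.norm(category_vec)
    kw = keyword_vecs / np.linalg.norm(keyword_vecs, axis=1, keepdims=True)
    sims = kw @ cat  # cosine similarity of every keyword to the category
    return [w for w, s in zip(keywords, sims) if s >= threshold]
```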

5.3 Forming interest segments

The goal of interest prediction is to identify groups of users with an interest in certain topics, such as music, travel, cooking, or books, allowing advertisers to target the Tumblr audience by interest. In the following we describe the method for predicting user interests used in this study.

In particular, after we obtained user profiles with categorized high-confidence tags and keywords, the interest score s_{u,c}(t) of user u in the c-th category at time t was calculated as

s_{u,c}(t) = Σ_{a ∈ A_u} e^{−λ(t − t_a)} · x_a · 1[class(a) = c],

where A_u is the set of all activities by user u, x_a is the value of the keyword feature for activity a (e.g., if the post contains two mentions of the keyword "shakira" then the value is equal to 2, as explained in Section 4.2), and the indicator function returns 1 if the keyword extracted from activity a is of class c, and 0 otherwise. In addition, we used the time stamp t_a of the activity to exponentially decay less recent activities, with decay rate λ, to account for passing interest (a fixed value of λ was used in our experiments). Note that the set A_u, in addition to the user's original content, also includes posts reblogged by user u.

The interest score represents an exponentially time-decayed count of all of the user's activities in the given category. Using this approach, we are able to qualify the top users in each category by sorting on the interest score. Depending on the advertiser's goals and the category, the number of qualified users varies from campaign to campaign. We note that a single user can be qualified into more than one interest category (e.g., a user can be categorized into "Sport", "Sport/Basketball", and "Health and Nutrition/Vitamins" at the same time) and, when the system was deployed, a user was assigned to several categories on average. An example of user profiles qualified into certain categories is given in Table 4.
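A minimal sketch of the time-decayed interest score described above, with an assumed decay rate and a simplified activity representation (keyword category, feature value, time stamp):

```python
import math

def interest_score(activities, category, now, decay=0.1):
    """Exponentially time-decayed interest score for one user and one
    category. `activities` is a list of (keyword_category, count, timestamp)
    tuples; `decay` is an assumed rate, not the value from the paper."""
    score = 0.0
    for cat, count, ts in activities:
        if cat == category:  # indicator function on the keyword class
            score += count * math.exp(-decay * (now - ts))
    return score
```

Activities outside the target category contribute nothing, and older activities contribute exponentially less.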

5.3.1 Leveraging the follower graph

To be able to target Tumblr users who do not create much content, but actively follow and engage with other blogs, we leverage the follower graph to create additional categorized features. Using equation 5.5 we can identify frequent bloggers in certain interest categories by focusing on the small fraction of users with the highest scores. Following and liking posts created by social influencers in a given category serves as additional evidence of one's interest in that category.

In each interest category, we label the users with the highest number of activities in that category as frequent bloggers. Next, we update the interest score s_{u,c}(t) (the exponentially decayed category activity count defined above) of each user u in the c-th category in the following manner:

s_{u,c}(t) ← s_{u,c}(t) + Σ_{b ∈ B_u} Σ_{g ∈ G_{u,b}} e^{−λ(t − t_g)} · w_g · 1[class(b) = c],

where B_u is the set of all frequent blogs followed by user u, G_{u,b} is the set of engagements of user u with the b-th blog (i.e., like or follow actions) along with their weights w_g (e.g., if posts created by the b-th blog were liked ten times, the total like weight is equal to 10; if user u followed the b-th blog, the follow weight is equal to 1), and the indicator function returns 1 if the blog is of class c, and 0 otherwise. Similarly to other activities, we applied exponential decay to the sum, based on the time stamps t_g of the follow and like actions.
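The follower-graph update can be sketched in the same style; the blog-to-category labels, engagement weights, and decay rate below are simplified placeholders:

```python
import math

def follower_boost(frequent_blogs, engagements, category, now, decay=0.1):
    """Additional interest score from follow/like engagement with frequent
    bloggers. `frequent_blogs` maps blog -> category; `engagements` maps
    blog -> list of (weight, timestamp) pairs, with weight 1 per like or
    follow. The decay rate is an assumed value."""
    boost = 0.0
    for blog, events in engagements.items():
        if frequent_blogs.get(blog) == category:  # indicator on blog class
            boost += sum(w * math.exp(-decay * (now - ts)) for w, ts in events)
    return boost
```

This boost is added on top of the user's own activity-based score for the category.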

We have observed that these additional signals, in the form of follow and like engagement with frequent bloggers, increase our segment sizes, making it possible to efficiently target more users.

5.4 Results

To evaluate the generated interest profiles we performed online A/B testing, working with several advertisers who ran concurrent interest-targeted and untargeted campaigns. We tracked user engagement with their ads in terms of sponsored post likes, reblogs, and follows, and show the results for the targeted campaigns in Table 6. We observed an average lift of 20% in user engagement (an aggregate of the three metrics) with sponsored posts as compared to untargeted campaigns. This performance result represents a significant improvement over the baseline approach.

6 Gender Prediction

In this section we explain details of our gender prediction model, based on the user profiles described in previous sections. We first describe the generation process of a golden set of labeled users, used to train a predictive model that generalizes well on the remaining unlabeled users. This is followed by the model description and discussion of results.

6.1 Collecting ground-truth labels

In order to train a machine learning method for gender prediction, in addition to user profiles we also require labels that represent the ground truth (i.e., "male" or "female"). However, Tumblr does not collect gender information when users sign up, leaving open the question of how to obtain such data.

Campaign Control Targeted
Home and Garden
Style & Fashion
Sports/Outdoor Sports
Arts & Enter./Television
Arts & Enter./Video Games
Arts & Entertainment 1
Arts & Entertainment 2
Table 6: A/B test results on a sample of the user population

To address this problem, we proposed to leverage the highly informative blog description data to infer user gender. In particular, users very often declare their name in the blog description, as illustrated in Figure 1. To extract user-declared names, we used several regular expression rules that we found to yield very high precision. The results obtained from a large set of name-matching regular expressions were editorially tested for quality. It was found that the regular expressions reported in Table 7 yielded the most reliable extracted names (valid names were extracted in the vast majority of cases). Then, to generate the gender ground truth, we used the US census data of popular baby names from 1880 to 2013 to create a "name → gender" mapping. As some names are given to both males and females, we used the empirical counts of babies with a certain name to generate soft labels. More specifically, we used male/female empirical ratios as soft labels, with 1 indicating 100% confidence in a male name and 0 indicating 100% confidence in a female name.
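The extraction pipeline can be sketched as follows. The two regex patterns are taken from Table 7, while the census counts in the sketch are hypothetical illustrative values:

```python
import re

# Two of the name-matching patterns from Table 7 (the full set is larger).
NAME_PATTERNS = [
    re.compile(r"my name is (\w+)", re.IGNORECASE),
    re.compile(r"my name's (\w+)", re.IGNORECASE),
]

def extract_name(blog_description):
    """Return the first user-declared name found in a blog description,
    lowercased, or None if no pattern matches."""
    for pattern in NAME_PATTERNS:
        match = pattern.search(blog_description)
        if match:
            return match.group(1).lower()
    return None

def soft_gender_label(name, census_counts):
    """Male/female empirical ratio as a soft label in [0, 1];
    `census_counts` maps a name to (male_count, female_count)
    from US census baby-name data."""
    male, female = census_counts.get(name, (0, 0))
    total = male + female
    return male / total if total else None
```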

regex count
my name is * 783,564
my name’s * 291,811
me llamo * 47,663
the name’s * 38,065
mi nombre es * 9,751
mi chiamo * 9,181
mein name ist * 1,025
meu nome e * 512
mon nom est * 215
mio nome e * 185
Table 7: Matching names in blog descriptions (* represents the matched name; following the methodology described in the text, a large number of female and male users were found)

6.2 Proposed approach

Let D = {(x_n, y_n)}, n = 1, …, N, denote our gender data set, where N is the total number of labeled users, x_n is a d-dimensional user feature vector generated from the user profiles, and y_n is the user label (a real-valued number ranging from 0 to 1). The feature vectors were generated from the user profiles described in Section 4.2 by disregarding time stamps (unlike users' interests, gender does not fluctuate over time) and using the keywords as features, with overall keyword counts as feature values. Our goal is to learn a gender-predictive model. As a classification model we used logistic regression, parameterized by a weight vector w. We assume that the posterior gender probability can be estimated as a linear function of the input passed through a sigmoid function,

P(y = 1 | x) = σ(wᵀx) = 1 / (1 + e^{−wᵀx}),

and P(y = 0 | x) = 1 − P(y = 1 | x). To estimate the parameters w, we minimize the following loss function,

L(w) = − Σ_{n=1}^{N} [ y_n log σ(wᵀx_n) + (1 − y_n) log(1 − σ(wᵀx_n)) ] + λ ‖w‖₁,

where the hyper-parameter λ controls the L1-regularization, introduced to induce sparsity in the parameter vector and reduce the feature space to the subset of features that are most predictive. For data sets with a large number of features, as in our use case, it is common that many features are not useful for producing the desired learning result. For this reason, L1-regularization was a critical part of our training procedure. In addition, we experimentally observed that the model generalizes better when we first train an initial model with L1-regularization to find which features have non-zero weight, and then perform another round of training without L1-regularization, using only the features with non-zero weights from the first round, to learn a better classifier.
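A minimal sketch of this two-round training procedure follows. The production system used the truncated-gradient implementation in Vowpal Wabbit [10]; here the optimizer, learning rate, iteration count, and L1 strength are simplified placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, l1=0.0, iters=500, lr=0.1):
    """Logistic regression via gradient descent with an optional L1
    subgradient penalty plus truncation to exact zeros (a crude stand-in
    for truncated-gradient training)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * (grad + l1 * np.sign(w))
        if l1 > 0:
            w[np.abs(w) < lr * l1] = 0.0  # truncate tiny weights to zero
    return w

def two_round_fit(X, y, l1=0.05):
    """Round 1: sparse L1 fit to select features with non-zero weight.
    Round 2: unregularized refit restricted to the selected features."""
    w1 = train_logreg(X, y, l1=l1)
    support = np.flatnonzero(w1)
    w2 = train_logreg(X[:, support], y, l1=0.0)
    w = np.zeros(X.shape[1])
    w[support] = w2
    return w
```

The second round lets the surviving features take unshrunk weights, which is the behavior the paper reports as generalizing better.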

Given a trained logistic regression model, the posterior class probability is estimated as P(y = 1 | x) = σ(wᵀx). Classification predictions are then made by thresholding, ŷ = 1[P(y = 1 | x) ≥ τ], where the threshold τ is set between 0 and 1 to ensure the desired precision and recall according to specific advertiser requirements.

6.3 Results

Gender Threshold Precision Recall
female 0.806 0.838
male 0.794 0.689
Table 8: Accuracy of gender model on hold-out set
Class Prediction Correct Wrong Not sure
female 4 298
male 5 127
Table 9: Editorial evaluation of random user predictions

To evaluate the accuracy of our gender prediction framework, we trained a logistic regression model on a portion of the golden set and tested on the remainder. We used the Vowpal Wabbit [10] implementation on Hadoop to train the model. To illustrate the performance of our gender classifier, precision and recall results at the chosen thresholds are presented in Table 8.

In addition to the evaluation on the hold-out set, we also editorially evaluated gender predictions on the unlabeled set of user profiles. We randomly picked gender predictions from the population of unlabeled users and asked editors to visit their profiles and verify the gender. They were instructed to mark our predictions as "correct", "incorrect", or "not sure". The "not sure" grade was to be used when visual inspection of the profile was inconclusive, which we found was often the case. The editorial judgments are summarized in Table 9. The fact that there are so many "not sure" grades indicates that in many cases it is hard to infer gender even after manual effort, further indicating the benefits of the proposed automated approach. Finally, we retrained the model on the full golden set and deployed it in Yahoo production systems.

A demonstration video of the most predictive tags in each gender group is available online.

7 Deployed system

Due to the rapid growth of Tumblr and the large number of activities generated by existing users, we implemented daily scoring of users on Yahoo production servers. We store the activities in Hive tables for efficient retrieval. The decayed counts used in interest prediction are updated on a daily basis by multiplying the old feature values by the decay factor and adding the new activities. To infer the gender of new users we implemented daily scoring by leveraging MapReduce on Hadoop. Both interest and gender models are retrained on a regular basis.
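The daily refresh of the decayed counts amounts to one multiply-and-add per category; a minimal sketch with an assumed decay factor:

```python
def daily_update(old_counts, new_counts, decay_factor=0.95):
    """Daily refresh of per-category decayed activity counts: scale
    yesterday's counts by the decay factor, then add today's activity.
    The decay factor is an assumed placeholder value."""
    categories = set(old_counts) | set(new_counts)
    return {c: decay_factor * old_counts.get(c, 0.0) + new_counts.get(c, 0.0)
            for c in categories}
```

This incremental form avoids rescanning a user's full history on every scoring run, which is what makes daily updates tractable at Tumblr's scale.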

After thorough editorial evaluation of the inferred gender and interest targeting, both targeting frameworks were enabled through the Gemini self-serve tool. Advertisers can choose to use gender and/or interest targeting with custom segment sizes, allowing for effective targeting campaigns.

8 Conclusions

We presented the steps in developing a large-scale Tumblr gender and interest targeting framework, in which we used historical Tumblr activities to create rich user profiles. We described the methodology, including a novel semi-supervised neural language model, and the high-level implementation details behind the deployed system. Currently, our gender and interest predictions cover more than 90% of Tumblr daily activities and are heavily leveraged by advertisers. In our ongoing work, we are concentrating on creating custom keyword-targeted advertising segments tailored to particular advertisers, including addressing the problems of keyword discovery and expansion.


  • [1] A. Ahmed, Y. Low, M. Aly, V. Josifovski, and A. J. Smola. Scalable distributed inference of dynamic user interests for behavioral targeting. In KDD, pages 114–122, 2011.
  • [2] J. Alba, J. Lynch, B. Weitz, C. Janiszewski, R. Lutz, A. Sawyer, and S. Wood. Interactive home shopping: consumer, retailer, and manufacturer incentives to participate in electronic marketplaces. The Journal of Marketing, pages 38–53, 1997.
  • [3] N. Barbieri, F. Bonchi, and G. Manco. Who to follow and why: link prediction with explanations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1266–1275. ACM, 2014.
  • [4] F. Bonchi, R. Perego, F. Silvestri, H. Vahabi, and R. Venturini. Efficient query recommendations in the long tail via center-piece subgraphs. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’12, pages 345–354, New York, NY, USA, 2012. ACM.
  • [5] A. Z. Broder. Computational advertising and recommender systems. In Proceedings of the ACM conference on Recommender systems, pages 1–2. ACM, 2008.
  • [6] Y. Chang, L. Tang, Y. Inagaki, and Y. Liu. What is tumblr: A statistical overview and comparison. ACM SIGKDD Explorations Newsletter, 16(1):21–29, 2014.
  • [7] R. K. Chellappa and R. G. Sin. Personalization versus privacy: An empirical examination of the online consumer’s dilemma. Information Technology and Management, 6(2-3):181–202, 2005.
  • [8] A. S. Das, M. Datar, A. Garg, and S. Rajaram. Google news personalization: Scalable online collaborative filtering. In WWW, pages 271–280. ACM, 2007.
  • [9] D. Essex. Matchmaker, matchmaker. Communications of the ACM, 52(5):16–17, 2009.
  • [10] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. The Journal of Machine Learning Research, 10:777–801, 2009.
  • [11] A. Majumder and N. Shrivastava. Know your personalization: Learning topic level personalization in online services. In Proceedings of the 22nd International Conference on World Wide Web, pages 873–884, 2013.
  • [12] U. Manber, A. Patel, and J. Robison. Experience with personalization on Yahoo! Communications of the ACM, 43(8):35, 2000.
  • [13] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119, 2013.
  • [14] S. Pandey, M. Aly, A. Bagherjeiran, A. Hatch, P. Ciccolo, A. Ratnaparkhi, and M. Zinkevich. Learning to target: what works for behavioral targeting. In Proceedings of the 20th ACM international conference on Information and knowledge management, pages 1805–1814. ACM, 2011.
  • [15] D. Riecken. Personalized views of personalization. Communications of the ACM, 43(8):27–28, 2000.
  • [16] D. Shin, S. Cetintas, and K.-C. Lee. Recommending tumblr blogs to follow with inductive matrix completion. In RecSys 14 Poster Proceedings, 2014.
  • [17] T. Singh, L. Veron-Jackson, and J. Cullinane. Blogging: A new play in your marketing game plan. Business Horizons, 51(4):281–292, 2008.
  • [18] S. K. Tyler, S. Pandey, E. Gabrilovich, and V. Josifovski. Retrieval models for audience selection in display advertising. In Proceedings of the 20th ACM international conference on Information and knowledge management, pages 593–598. ACM, 2011.
  • [19] Y. Chang, L. Tang, Y. Inagaki, and Y. Liu. What is tumblr: A statistical overview and comparison. arXiv:1403.5206v2, 2014.