Reducing offline evaluation bias of collaborative filtering algorithms

06/12/2015
by Arnaud De Myttenaere, et al.

Recommendation systems have been integrated into the majority of large online systems to filter and rank information according to user profiles. They thus influence the way users interact with the system and, as a consequence, bias the evaluation of the performance of a recommendation algorithm computed using historical data (via offline evaluation). This paper presents a new application of a weighted offline evaluation to reduce this bias for collaborative filtering algorithms.



11/04/2015

Study of a bias in the offline evaluation of a recommendation algorithm

Recommendation systems have been integrated into the majority of large o...
07/03/2014

Reducing Offline Evaluation Bias in Recommendation Systems

Recommendation systems have been integrated into the majority of large o...
12/18/2019

Collaborative Filtering vs. Content-Based Filtering: differences and similarities

Recommendation Systems (SR) suggest items exploring user preferences, he...
09/05/2019

Assessing Fashion Recommendations: A Multifaceted Offline Evaluation Approach

Fashion is a unique domain for developing recommender systems (RS). Pers...
03/31/2010

Unbiased Offline Evaluation of Contextual-bandit-based News Article Recommendation Algorithms

Contextual bandit algorithms have become popular for online recommendati...
09/12/2009

Performing Hybrid Recommendation in Intermodal Transportation-the FTMarket System's Recommendation Module

Diverse recommendation techniques have been already proposed and encapsu...
09/25/2020

Learning Representations of Hierarchical Slates in Collaborative Filtering

We are interested in building collaborative filtering models for recomme...

1 Introduction

Recommendation systems have been extensively studied in the literature; they aim to provide a user with a set of possibly ranked items that are supposed to match the interests of the user [park2012literature]. Applications of such systems are ubiquitous on the Internet (e-commerce, online advertising, social networks, …), and can be seen as a way to adapt a system to a user.

Obviously, recommendation algorithms must be evaluated before and during their active use in order to ensure their performance. Live monitoring is generally achieved using online performance metrics (e.g. the click-through rate of displayed ads), whereas offline evaluation is computed using historical data. Offline evaluation makes it possible to quickly test several strategies without having to wait for real metrics to be collected and without impacting the performance of the online system. One of the main strategies of offline evaluation consists in simulating a recommendation by removing a confirmation action (click, purchase, etc.) from a user profile and testing whether the item associated to this action would have been recommended based on the rest of the profile [shani2011evaluating].
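As a sketch, this leave-one-out protocol can be written as follows; the profile format (user → set of confirmed items) and the `recommend` callback are illustrative assumptions, not the paper's implementation:

```python
import random

def offline_evaluate(profiles, recommend, n_samples=1000, seed=0):
    """Estimate offline quality: hide one confirmed item per sampled user
    and count how often the algorithm recovers it.

    profiles: dict mapping user id -> set of confirmed items (assumed format).
    recommend: callable taking the reduced profile, returning recommended items.
    """
    rng = random.Random(seed)
    users = [u for u, items in profiles.items() if items]
    hits = 0
    for _ in range(n_samples):
        u = rng.choice(users)                 # draw a user uniformly
        i = rng.choice(sorted(profiles[u]))   # draw one of their items uniformly
        rest = profiles[u] - {i}              # profile with the item removed
        hits += i in recommend(rest)          # success iff the hidden item comes back
    return hits / n_samples
```

A recommender that always returns the hidden item scores 1.0 under this protocol; one that never does scores 0.0.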

As presented in [li2011unbiased, demytt2014reducing], this scheme ignores various factors that have influenced the historical data, such as the recommendation algorithms previously used or promotional offers on specific products. Even though the limits of evaluation strategies for recommendation algorithms have been identified ([HerlockerEtAl2004Evaluating, mcnee2006being, said2013user]), this protocol is still intensively used in practice.

We study in this paper the general principle of instance weighting proposed in [demytt2014reducing] and show its practical relevance beyond the simple case of constant recommendation (i.e. when recommendations are the same for every user). In addition to its good performance, this method is more realistic than the solutions proposed in [HerlockerEtAl2004Evaluating, mcnee2006being], for which a data collection phase based on random recommendations has to be performed. While this phase allows one to build a bias-free evaluation data set, it also has adverse effects in terms of e.g. public image or business performance when used on a live system.

The rest of the paper is organized as follows. Section 2 describes the setting and the problem in detail. Section 3 introduces the weighting scheme proposed to reduce the evaluation bias. Section 4 demonstrates the practical relevance of our method on real world data extracted from Viadeo (a professional social network; see http://corporate.viadeo.com/en/ for more information).

2 Problem formulation

2.1 Notations and setting

We denote U the set of users, I the set of items and D(t) the historical data available at time t. A recommendation algorithm is a function g from U × D to some set built from I. We will denote g(u, t) the recommendation computed by g at instant t for user u. We assume given a quality function ℓ, from the product of the result space of g and I to the real numbers, that measures to what extent an item i is correctly recommended by g at time t via ℓ(g(u, t), i). We denote I(u) the items associated to a user u.

Offline evaluation is based on the possibility of “removing” any item i from a user profile. The result is denoted g(u ∖ i, t) and is the recommendation obtained at instant t when item i has been removed from the profile of user u.

Finally, offline evaluation follows a general scheme in which a user u is chosen according to some probability P_t(u) on users, which might reflect the business importance of the users. Given a user, an item i is chosen among the items associated to its profile, according to some conditional probability P_t(i | u) on items. When an item is not associated to a user (that is, i ∉ I(u)), P_t(i | u) = 0. A very common choice for P_t(u) is the uniform probability on U, and it is also very common to use a uniform probability on I(u) for P_t(i | u) (other strategies could favor items recently associated to a profile). As the system evolves over time, P_t(u) and P_t(i | u) depend on t.

The two distributions P_t(u) and P_t(i | u) lead to a joint distribution P_t(u, i) = P_t(u) P_t(i | u) on U × I.
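With both choices uniform, drawing a (user, item) couple from this joint distribution amounts to two nested draws; a minimal sketch (the user → item-set profile format is an assumption):

```python
import random

def draw_pair(profiles, rng):
    """Draw (u, i): first a user uniformly, then one of that user's items
    uniformly, i.e. the uniform/uniform case of the joint distribution."""
    u = rng.choice(sorted(profiles))      # user draw
    i = rng.choice(sorted(profiles[u]))   # item draw, conditional on the user
    return u, i
```

Items outside the chosen user's profile have probability zero by construction, matching the conditional distribution described above.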

2.2 Origin of the bias in offline evaluation

The classic offline evaluation procedure consists in calculating the quality L_t(g) of the recommendation algorithm g at instant t as the expectation of ℓ taken with respect to the joint distribution:

L_t(g) = Σ_{u ∈ U} P_t(u) Σ_{i ∈ I(u)} P_t(i | u) ℓ(g(u ∖ i, t), i).   (1)

Then if two algorithms are evaluated at two different moments, their qualities are not directly comparable: in an online system, P_t evolves over time (even though ℓ could also evolve over time, we do not consider the effects of such an evolution in the present article). Once a recommendation algorithm is chosen based on a given state of the system, it starts influencing that state when put in production, inducing an increasing distance between its evaluation environment (i.e. the initial state of the system) and the evolving state of the system. This influence is responsible for a bias in offline evaluation, as it relies on historical data.

A naive solution to this bias would be to compare algorithms only with respect to the original database at t = 0, but it would discard the natural evolution of user profiles.

3 Reducing the evaluation bias

3.1 A suggested method to reduce the bias

A simple transformation of equation (1) shows that for a constant algorithm g that always recommends the same set R: L_t(g) = Σ_{i ∈ I} P_t(i) ℓ(R, i). As a consequence, a way to guarantee a stationary evaluation framework for a constant algorithm is to have constant values for the marginal distribution of items, P_t(i) = Σ_{u ∈ U} P_t(u) P_t(i | u).

A natural solution would be to record those marginal probabilities P_0(i) at t = 0 and use them as the probability of selecting an item in the offline evaluation at time t. However, as the selection of users and items leads to a joint distribution, this would require reverting the way offline evaluation is done: first select an item, then select a user having this item with a certain probability, which in turn leads to a different probability of user selection. This process leads to a similar problem on users, and as in most systems the number of users is much larger than the number of items, it is more efficient to follow the classical evaluation protocol.

Moreover, we will see that recalibrating every item is not necessary to remove the main part of the bias. Indeed, in practice a few items concentrate most of the recommendations (very popular items, discounts on selected products, …). Thus one can remove the major part of the bias by optimizing the weights of the k items for which the deviation |P_t(i) − P_0(i)| has the strongest values. In practice k is chosen according to practical constraints (time) or business constraints.
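Picking the most drifted items is a simple ranking of absolute deviations between the two marginals; a sketch assuming both distributions are given as item → probability dicts:

```python
def top_k_drifted_items(p0, pt, k):
    """Return the k items with the largest deviation |P_t(i) - P_0(i)|."""
    items = set(p0) | set(pt)
    deviation = {i: abs(pt.get(i, 0.0) - p0.get(i, 0.0)) for i in items}
    # sort by decreasing deviation, breaking ties by item id for determinism
    return sorted(items, key=lambda i: (-deviation[i], i))[:k]
```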

Thus the weighting strategy that we described in [demytt2014reducing] consists in keeping the classical choice for P_t(u) and departing from the classical values for P_t(i | u) (such as the uniform probability) in order to mimic static values for P_t(i). Given positive item weights ω = (ω_i)_{i ∈ I}, the weighted conditional probabilities are, for i ∈ I(u):

P_{t,ω}(i | u) = ω_i / Σ_{j ∈ I(u)} ω_j.   (2)

These weighted conditional probabilities lead to weighted item probabilities defined by:

P_{t,ω}(i) = Σ_{u ∈ U} P_t(u) P_{t,ω}(i | u).   (3)

Then we minimize the distance between P_0(i) and P_{t,ω}(i) by optimizing the Kullback-Leibler divergence, defined by:

D(P_0 ‖ P_{t,ω}) = Σ_{i ∈ I(0)} P_0(i) log ( P_0(i) / P_{t,ω}(i) ),

where I(0) represents the set of items present at t = 0. The asymmetric nature of this divergence is useful in our context to take time 0 as a reference. Moreover, this asymmetry reduces the influence of items that were rare at time 0 (as they were not very important in the calculation of L_0).
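Numerically, this amounts to computing the weighted item marginal, measuring its divergence from the time-0 marginal, and adjusting the weights. The multiplicative update below is our own illustrative optimizer, not the paper's procedure, and the user → item-set profile format is also an assumption:

```python
import math

def weighted_item_probs(profiles, weights):
    """Weighted marginal of items: average over users (uniform here) of
    w_i / (sum of weights of the user's items)."""
    p = {}
    for items in profiles.values():
        z = sum(weights.get(j, 1.0) for j in items)
        for i in items:
            p[i] = p.get(i, 0.0) + weights.get(i, 1.0) / (z * len(profiles))
    return p

def kl(p0, pw, eps=1e-12):
    """Kullback-Leibler divergence D(P_0 || P_w), summed over time-0 items."""
    return sum(q * math.log(q / max(pw.get(i, 0.0), eps))
               for i, q in p0.items() if q > 0)

def fit_weights(profiles, p0, steps=200, lr=0.5):
    """Crude multiplicative updates pushing the weighted marginal toward P_0."""
    weights = {i: 1.0 for items in profiles.values() for i in items}
    for _ in range(steps):
        pw = weighted_item_probs(profiles, weights)
        for i in weights:
            ratio = p0.get(i, 1e-12) / max(pw.get(i, 1e-12), 1e-12)
            weights[i] *= ratio ** lr  # raise weights of under-represented items
    return weights
```

On a toy data set where one item is over-represented at time t, fitting drives its weight down and the divergence to the time-0 marginal decreases.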

3.2 Previous results

As described in [demytt2014reducing], the score of the algorithm in production, as given by classical offline evaluation, tends to increase over time. More generally, classical offline evaluation tends to overestimate (resp. underestimate) the unbiased score of an algorithm similar (resp. orthogonal) to the one in production.

We have also shown in [demytt2014reducing] that the suggested weighting strategy perfectly recalibrates the score obtained by the classical offline evaluation for constant algorithms and high values of k (the number of optimized weights). Thus, this method reduces the bias for the very simple class of constant algorithms.

In the next section we apply this method to collaborative filtering algorithms.

4 Experiments on collaborative filtering

4.1 Data and metrics

We consider real world data extracted from Viadeo, where skills are attached to user profiles. The objective of the recommendation system is to suggest new skills to users. The dataset contains 18294 users and 180 items (skills), leading to 117376 (user, item) couples.

Both probabilities P_t(u) and P_t(i | u) are uniform, and the quality function ℓ(g(u ∖ i, t), i) is equal to 1 if i belongs to the recommended set g(u ∖ i, t) of 5 items, and 0 otherwise. The quality of a recommendation algorithm, L_t(g), is estimated via stochastic sampling in order to simulate what could be done on a larger data set than the one used for this illustration: we repeatedly selected 20 000 (user, item) couples (first a user u drawn uniformly, then an item i drawn according to P_t(i | u)).

4.2 Collaborative filtering algorithms

Let X_u(t) be the binary vector of items of user u at time t (X_{u,i}(t) = 1 if i ∈ I(u) at time t, and 0 otherwise). Then X_u(t) is a sparse vector, as most users are associated to only a few items. The objective of collaborative filtering algorithms is to estimate X_{u,i} for i ∉ I(u) using the information known on other users. In this paper we present two different collaborative filtering algorithms:

The score

s_cos(u, i, t) = Σ_{v ≠ u} cos(X_u(t), X_v(t)) · X_{v,i}(t)

is known as collaborative filtering with cosine similarity, whereas the score

s_naive(u, i, t) = |{v ≠ u : i ∈ I(v) and I(v) ∩ I(u) ≠ ∅}| / |{v ≠ u : I(v) ∩ I(u) ≠ ∅}|

computes the proportion of users associated to item i among those associated to at least one item possessed by u. We will call naive CF (Collaborative Filtering) the algorithm based on s_naive.

Finally, the recommendation strategy consists in recommending the 5 items with the highest scores.
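As an illustrative reading of these two scores (a sketch over set-valued binary profiles; the function names and the set representation are ours, not the paper's):

```python
import math

def cosine(a, b):
    """Cosine similarity between two item sets seen as binary vectors."""
    return len(a & b) / math.sqrt(len(a) * len(b)) if a and b else 0.0

def cosine_cf_score(profiles, u, i):
    """Cosine-similarity CF: sum of similarities to users who possess item i."""
    return sum(cosine(profiles[u], items)
               for v, items in profiles.items() if v != u and i in items)

def naive_cf_score(profiles, u, i):
    """Naive CF: proportion of users holding i among users sharing
    at least one item with u."""
    neighbors = [items for v, items in profiles.items()
                 if v != u and items & profiles[u]]
    return sum(i in items for items in neighbors) / len(neighbors) if neighbors else 0.0

def recommend(profiles, u, score, n=5):
    """Recommend the n unseen items with the highest scores."""
    candidates = {i for items in profiles.values() for i in items} - profiles[u]
    return sorted(candidates, key=lambda i: -score(profiles, u, i))[:n]
```

Either scoring function can be passed to `recommend`, so the two algorithms share the same top-n recommendation strategy.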

4.3 Results

We apply the method described in Section 3 to compute optimal weights at different instants t and for several values of the parameter k (the number of weights optimized). The collaborative filtering algorithms are the ones presented in Section 4.2. Results are summarized in Figure 1.

(a) cosine similarity
(b) naive CF
Figure 1: Results on collaborative filtering with cosine similarity and naive CF, respectively defined in Section 4.2, for several values of k (the number of weights optimized).

The analysis is conducted over a 201-day period, from day 300 to day 500, where day 0 corresponds to the launch date of the skill feature. It is important to notice that two recommendation campaigns were conducted by Viadeo during this period. As we can see in Figure 1, the scores strongly decrease after the first recommendation campaign. Those campaigns have thus strongly biased the collected data, leading to a significant bias in the offline evaluation score.

Figure 1 shows the influence of the value of k: the higher k is, the more weights are optimized and the more the bias is corrected. However, the efficiency of the recalibration depends on the algorithm. The results show that the weighting protocol reduces the impact of recommendation campaigns on offline evaluation results, as intended. However, it does not make the score of collaborative filtering algorithms stationary (while it leads to constant scores for constant algorithms). This can be explained by the nature of collaborative filtering: we cannot expect the score to be constant for such an algorithm, as it depends on the correlations between users, which have been modified by the recommendation campaigns.

5 Conclusion

Various factors influence historical data and bias the score obtained by the classical offline evaluation strategy. Indeed, as recommendations influence users, a recommendation algorithm in production tends to be favored by offline evaluation.

We have presented a new application of the item weighting strategy inspired by techniques designed for tackling the covariate shift problem. Whereas our previous results demonstrated the efficiency of this method for constant algorithms, we have shown here that it also reduces the bias of more elaborate algorithms.

However, the efficiency of this approach depends on the algorithm, as a recommendation campaign also introduces bias into the correlations between users. Thus the presented strategy reduces part of the bias, and future work will focus on the structural bias introduced by recommendation campaigns.

References