From Counter-intuitive Observations to a Fresh Look at Recommender System

10/09/2022
by Aixin Sun, et al.

Recently, a few papers have reported counter-intuitive observations from experiments on recommender systems (RecSys). One observation is that users who spend more time with, and have more interactions on, a recommendation platform receive poorer recommendations. Another is that models trained on only the more recent portion of a dataset show significant performance improvements. In this opinion paper, we interpret these counter-intuitive observations from two perspectives. First, the observations are made with respect to the global timeline of user-item interactions. Second, they are considered counter-intuitive because they contradict our expectation of a recommender: the more interactions a user has, the better the recommender should learn that user's preferences. For the first perspective, we discuss the importance of the global timeline, using the simplest baseline, Popularity, as a starting point. We answer two questions: (i) why is the popularity baseline often ill-defined in academic research, and (ii) why is it evaluated in the way it is? These questions lead to a detailed discussion of the data leakage issue in many offline evaluations. As a result, model accuracies reported in many academic papers are less meaningful and hardly comparable. For the second perspective, we try to answer two more questions: (i) why do models trained on only the more recent parts of the data perform better, and (ii) why do more interactions from users lead to poorer recommendations? The key to both questions is user preference modeling. We then propose a fresh look at RecSys: we discuss how to conduct more practical offline evaluations and possible ways to model user preferences effectively. The discussion and opinions in this paper concern top-N recommendation only, not rating prediction.
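The data leakage issue can be illustrated with the Popularity baseline: if items are ranked by popularity counted over the entire dataset, including interactions that occur after the evaluation point on the global timeline, the baseline effectively peeks into the future. The sketch below is a minimal illustration, not the paper's own protocol; it assumes a pandas DataFrame of (user, item, timestamp) interactions with numeric timestamps, and the column names and split ratio are illustrative.

```python
import pandas as pd

def temporal_split(interactions: pd.DataFrame, train_ratio: float = 0.8):
    """Split by the global timeline: interactions at or before the cut-off
    timestamp form the training set, later interactions form the test set."""
    cutoff = interactions["timestamp"].quantile(train_ratio)
    train = interactions[interactions["timestamp"] <= cutoff]
    test = interactions[interactions["timestamp"] > cutoff]
    return train, test

def popularity_ranking(train: pd.DataFrame, top_n: int = 10):
    """Popularity baseline defined only on training interactions,
    i.e., on what was popular up to the cut-off time."""
    return train["item"].value_counts().head(top_n).index.tolist()

def leaky_popularity_ranking(interactions: pd.DataFrame, top_n: int = 10):
    """Leakage-prone variant often implied by random or leave-one-out splits:
    popularity is counted over the full dataset (train + test), so the ranking
    is informed by interactions that happen after the recommendations
    would have been made."""
    return interactions["item"].value_counts().head(top_n).index.tolist()
```

Under a random or leave-one-out split, the evaluated baseline is effectively the leaky variant, because item counts aggregate over the whole timeline; with the temporal split, the baseline only sees what was popular before the cut-off, which is what a deployed system could actually know.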

Related research

04/12/2022
Recommender May Not Favor Loyal Users
In academic research, recommender systems are often evaluated on benchma...

10/21/2020
On Offline Evaluation of Recommender Systems
In academic research, recommender models are often evaluated offline on ...

05/28/2020
A Re-visit of the Popularity Baseline in Recommender Systems
Popularity is often included in experimental evaluation to provide a ref...

04/15/2023
More Is Less: When Do Recommenders Underperform for Data-rich Users?
Users of recommender systems tend to differ in their level of interactio...

01/26/2020
Estimating Error and Bias in Offline Evaluation Results
Offline evaluations of recommender systems attempt to estimate users' sa...

07/26/2019
On the Value of Bandit Feedback for Offline Recommender System Evaluation
In academic literature, recommender systems are often evaluated on the t...
