A Missing Information Loss function for implicit feedback datasets

04/30/2018
by   Juan Arévalo, et al.
Latent factor models with implicit feedback typically treat unobserved user-item interactions (i.e. missing information) as negative feedback. This is frequently done either through negative sampling (point-wise loss) or with a ranking loss function (pair- or list-wise estimation). Since a zero-preference recommendation is a valid solution for most common objective functions, regarding unknown values as actual zeros results in users receiving a zero-preference recommendation for most of the available items. In this paper we propose a novel objective function, the Missing Information Loss (MIL) function, that explicitly forbids treating unobserved user-item interactions as positive or negative feedback. We apply this loss to a user-based Denoising Autoencoder and compare it with other known objective functions such as cross-entropy (both point- and pair-wise) and the recently proposed multinomial log-likelihood. The MIL function achieves the best results in ranking-aware metrics when applied to the Movielens-20M and Netflix datasets, slightly above those obtained with cross-entropy in point-wise estimation. Furthermore, this competitive performance is obtained while recommending popular items less frequently, a valuable feature for Recommender Systems with a large catalogue of products.
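The paper's exact MIL formula is not given in this abstract, but the idea it describes can be sketched in code. The hedged illustration below (a hypothetical realization, not necessarily the authors' formulation) combines standard cross-entropy on observed positives with a term that penalizes confident predictions, near either 0 or 1, on missing entries, so that unobserved interactions are pushed toward neither positive nor negative feedback:

```python
import numpy as np

def mil_style_loss(pred, obs_mask, eps=1e-8):
    """Hypothetical sketch of a Missing-Information-style loss.

    pred:     predicted preference probabilities in (0, 1)
    obs_mask: 1.0 where the user-item interaction is an observed
              positive, 0.0 where the entry is missing
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Observed positives: ordinary cross-entropy toward 1.
    pos_term = -obs_mask * np.log(pred)
    # Missing entries: the penalty grows as the prediction approaches
    # either extreme (0 or 1), so "unknown" is not treated as a zero.
    missing = 1.0 - obs_mask
    mil_term = -missing * np.log(1.0 - np.abs(2.0 * pred - 1.0) + eps)
    return float((pos_term + mil_term).sum())
```

Under this sketch, a prediction of 0.5 on a missing entry incurs (almost) no penalty, while a confident prediction near 0 or 1 on the same entry is penalized heavily, which is the behavior the abstract attributes to MIL.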

research
05/16/2021

Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation

As users often express their preferences with binary behavior data (impl...
research
04/14/2022

Self-Guided Learning to Denoise for Robust Recommendation

The ubiquity of implicit feedback makes them the default choice to build...
research
08/23/2023

Learning from Negative User Feedback and Measuring Responsiveness for Sequential Recommenders

Sequential recommenders have been widely used in industry due to their s...
research
08/14/2023

gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling

A large catalogue size is one of the central challenges in training reco...
research
09/14/2023

Turning Dross Into Gold Loss: is BERT4Rec really better than SASRec?

Recently sequential recommendations and next-item prediction task has be...
research
12/08/2020

Split: Inferring Unobserved Event Probabilities for Disentangling Brand-Customer Interactions

Often, data contains only composite events composed of multiple events, ...
research
06/07/2020

Denoising Implicit Feedback for Recommendation

The ubiquity of implicit feedback makes them the default choice to build...
