gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling

08/14/2023
by Aleksandr Petrov, et al.

A large catalogue size is one of the central challenges in training recommendation models: a large number of items makes it memory- and computationally inefficient to compute scores for all items during training, forcing these models to deploy negative sampling. However, negative sampling increases the proportion of positive interactions in the training data, and therefore models trained with negative sampling tend to overestimate the probabilities of positive interactions, a phenomenon we call overconfidence. While the absolute values of the predicted scores or probabilities are not important for the ranking of retrieved recommendations, overconfident models may fail to estimate nuanced differences between the top-ranked items, resulting in degraded performance. In this paper, we show that overconfidence explains why the popular SASRec model underperforms when compared to BERT4Rec; this is contrary to the BERT4Rec authors' explanation that the difference in performance is due to the bi-directional attention mechanism. To mitigate overconfidence, we propose a novel Generalised Binary Cross-Entropy loss function (gBCE) and theoretically prove that it alleviates the problem. We further propose the gSASRec model, an improvement over SASRec that deploys an increased number of negatives together with the gBCE loss. We show through detailed experiments on three datasets that gSASRec does not exhibit the overconfidence problem. As a result, gSASRec can outperform BERT4Rec (e.g. +9.47% NDCG) while requiring less training time (e.g. -73%). Moreover, in contrast to BERT4Rec, gSASRec is suitable for large datasets that contain more than 1 million items.
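The abstract names the gBCE loss but does not reproduce its formula. The PyTorch sketch below only illustrates the core idea under stated assumptions: sampled binary cross-entropy whose positive term is raised to a power beta, so that beta = 1 recovers standard BCE with negative sampling and beta < 1 tempers overconfident positive probabilities. The function name gbce_loss, its signature, and the value beta = 0.75 are illustrative choices, not the authors' reference implementation; the full paper derives the appropriate exponent from the negative sampling rate.

```python
# Illustrative sketch (not the paper's reference code) of a gBCE-style loss:
# BCE over one positive and k sampled negatives, with the positive term
# raised to a power beta to counteract overconfidence.
import torch
import torch.nn.functional as F


def gbce_loss(pos_logits: torch.Tensor,
              neg_logits: torch.Tensor,
              beta: float = 1.0) -> torch.Tensor:
    """Sampled binary cross-entropy with a power beta on the positive term.

    With beta = 1.0 this is ordinary BCE over one positive and k sampled
    negatives per training position, which (per the abstract) yields
    overconfident probability estimates when k is far smaller than the
    catalogue size. Choosing beta < 1 downweights log sigma(s+), softening
    the push towards probability 1 for positive items.
    """
    # log(sigma(s+)^beta) = beta * log(sigma(s+)); logsigmoid is numerically stable.
    pos_term = beta * F.logsigmoid(pos_logits)        # shape: (batch,)
    # Negatives keep the standard term log(1 - sigma(s-)) = logsigmoid(-s-).
    neg_term = F.logsigmoid(-neg_logits).sum(dim=-1)  # shape: (batch,)
    return -(pos_term + neg_term).mean()


# Toy usage: logits for 1 positive and k = 4 sampled negatives per example.
pos = torch.randn(32, requires_grad=True)
neg = torch.randn(32, 4, requires_grad=True)
print(gbce_loss(pos, neg, beta=0.75).item())
```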


research
09/14/2023

Turning Dross Into Gold Loss: is BERT4Rec really better than SASRec?

Recently sequential recommendations and next-item prediction task has be...
research
01/03/2023

Effective and Efficient Training for Sequential Recommendation Using Cumulative Cross-Entropy Loss

Increasing research interests focus on sequential recommender systems, a...
research
04/30/2018

A Missing Information Loss function for implicit feedback datasets

Latent factor models with implicit feedback typically treat unobserved u...
research
05/19/2020

Addressing Class-Imbalance Problem in Personalized Ranking

Pairwise ranking models have been widely used to address recommendation ...
research
01/22/2023

Debiasing the Cloze Task in Sequential Recommendation with Bidirectional Transformers

Bidirectional Transformer architectures are state-of-the-art sequential ...
research
07/06/2022

Effective and Efficient Training for Sequential Recommendation using Recency Sampling

Many modern sequential recommender systems use deep neural networks, whi...
research
04/05/2022

Positive and Negative Critiquing for VAE-based Recommenders

Providing explanations for recommended items allows users to refine the ...
