
Multi-Step Adversarial Perturbations on Recommender Systems Embeddings

by Vito Walter Anelli, et al.

Recommender systems (RSs) have attained exceptional performance in learning users' preferences and helping them find the most suitable products. Recent advances in adversarial machine learning (AML) in the computer vision domain have raised interest in the security of state-of-the-art model-based recommenders. A worrying deterioration of recommendation accuracy has recently been acknowledged for several state-of-the-art model-based recommenders (e.g., BPR-MF) when machine-learned adversarial perturbations contaminate their model parameters. However, while the single-step fast gradient sign method (FGSM) is the most explored perturbation strategy, multi-step (iterative) perturbation strategies, which have demonstrated higher efficacy in the computer vision domain, remain largely under-researched in recommendation tasks. In this work, inspired by the basic iterative method (BIM) and the projected gradient descent (PGD) strategies proposed in the CV domain, we adapt multi-step strategies to the item recommendation task to study the weaknesses of embedding-based recommender models under minimal adversarial perturbations. With the magnitude of the perturbation fixed, we demonstrate the higher efficacy of multi-step perturbations compared to the single-step one through extensive empirical evaluation on two widely adopted recommender datasets. Furthermore, we study the impact of structural dataset characteristics, i.e., sparsity, density, and size, on the performance degradation caused by the presented perturbations, to support RS designers in interpreting recommendation performance variations due to minimal changes of model parameters. Our implementation and datasets are available at
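To illustrate the contrast the abstract draws between single-step (FGSM) and multi-step (BIM/PGD-style) perturbations of embedding parameters, here is a minimal NumPy sketch. It perturbs a user embedding to increase a BPR-style pairwise loss under a fixed L-infinity budget. The loss, step sizes, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def bpr_loss_grad(u, pos, neg):
    """Gradient of the BPR loss -log(sigmoid(u.(pos - neg)))
    w.r.t. the user embedding u, for one positive/negative item pair."""
    margin = u @ (pos - neg)              # predicted score difference
    sig = 1.0 / (1.0 + np.exp(margin))    # sigmoid(-margin)
    return -sig * (pos - neg)             # dL/du

def fgsm_perturb(u, pos, neg, eps):
    """Single-step FGSM: one signed-gradient ascent step of size eps."""
    g = bpr_loss_grad(u, pos, neg)
    return u + eps * np.sign(g)

def pgd_perturb(u, pos, neg, eps, steps=10):
    """Multi-step PGD-style attack: repeated small signed steps,
    projected back onto the L-infinity ball of radius eps around u."""
    alpha = eps / steps                   # illustrative per-step size
    adv = u.copy()
    for _ in range(steps):
        g = bpr_loss_grad(adv, pos, neg)
        adv = adv + alpha * np.sign(g)
        adv = np.clip(adv, u - eps, u + eps)  # projection step
    return adv

rng = np.random.default_rng(0)
u = rng.normal(size=8)
pos, neg = rng.normal(size=8), rng.normal(size=8)

u_fgsm = fgsm_perturb(u, pos, neg, eps=0.05)
u_pgd = pgd_perturb(u, pos, neg, eps=0.05, steps=10)
# Both attacks respect the same fixed perturbation magnitude eps,
# which is the comparison setting described in the abstract.
assert np.max(np.abs(u_fgsm - u)) <= 0.05 + 1e-12
assert np.max(np.abs(u_pgd - u)) <= 0.05 + 1e-12
```

The key difference is that PGD can follow the curvature of the loss with several small steps inside the same budget, which is why iterative attacks tend to degrade accuracy more than a single FGSM step of equal magnitude.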


Related research:

Adversarial Personalized Ranking for Recommendation
Robustness of Deep Recommendation Systems to Untargeted Interaction Perturbations
Can Adversarial Weight Perturbations Inject Neural Backdoors?
From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems
Adversarial Machine Learning in Recommender Systems: State of the art and Challenges
Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality
Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest