Multi-Step Adversarial Perturbations on Recommender Systems Embeddings

10/03/2020
by Vito Walter Anelli, et al.

Recommender systems (RSs) have attained exceptional performance in learning users' preferences and helping them find the most suitable products. Recent advances in adversarial machine learning (AML) in the computer vision (CV) domain have raised interest in the security of state-of-the-art model-based recommenders. Recently, a worrying deterioration of recommendation accuracy has been observed in several state-of-the-art model-based recommenders (e.g., BPR-MF) when machine-learned adversarial perturbations contaminate their model parameters. However, while the single-step fast gradient sign method (FGSM) is the most explored perturbation strategy, multi-step (iterative) perturbation strategies, which have demonstrated higher efficacy in the CV domain, remain largely under-researched in recommendation tasks. In this work, inspired by the basic iterative method (BIM) and the projected gradient descent (PGD) strategies proposed in the CV domain, we adapt multi-step strategies to the item recommendation task to study the weaknesses of embedding-based recommender models under minimal adversarial perturbations. Keeping the perturbation magnitude fixed, we show, through extensive empirical evaluation on two widely adopted recommender datasets, that multi-step perturbations are more effective than single-step ones. Furthermore, we study the impact of structural dataset characteristics, i.e., sparsity, density, and size, on the performance degradation caused by the presented perturbations, to support RS designers in interpreting recommendation performance variations due to minimal changes in model parameters. Our implementation and datasets are available at https://anonymous.4open.science/r/9f27f909-93d5-4016-b01c-8976b8c14bc5/.
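To make the contrast concrete: single-step FGSM takes one signed-gradient step of magnitude eps on the model parameters, while multi-step BIM/PGD takes several smaller signed steps and clips the result back into the eps-ball around the original parameters. The sketch below illustrates this on a toy BPR triple with a perturbed user embedding; the function names, step sizes, and toy loss are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(u, i_pos, i_neg):
    # BPR loss for one (user, positive item, negative item) triple:
    # -log sigmoid(u . (i_pos - i_neg)), written stably via log1p.
    return float(np.log1p(np.exp(-(u @ (i_pos - i_neg)))))

def bpr_grad_u(u, i_pos, i_neg):
    # Gradient of the BPR loss w.r.t. the user embedding u.
    d = i_pos - i_neg
    return -(1.0 - sigmoid(u @ d)) * d

def fgsm_perturb(u, grad, eps):
    # Single-step FGSM: one signed step of size eps (ascends the loss).
    return u + eps * np.sign(grad)

def bim_perturb(u, grad_fn, eps, alpha, steps):
    # Multi-step BIM/PGD: repeated signed steps of size alpha, each
    # clipped back into the L-inf ball of radius eps around u.
    adv = u.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(grad_fn(adv))
        adv = np.clip(adv, u - eps, u + eps)
    return adv

# Toy example (hypothetical data): perturb a user embedding so the
# BPR loss on a held triple increases, degrading the ranking.
rng = np.random.default_rng(0)
u = rng.normal(size=8)
i_pos, i_neg = rng.normal(size=8), rng.normal(size=8)
eps, alpha, steps = 0.05, 0.01, 10

adv = bim_perturb(u, lambda v: bpr_grad_u(v, i_pos, i_neg), eps, alpha, steps)
assert np.all(np.abs(adv - u) <= eps + 1e-9)  # perturbation stays in the eps-ball
assert bpr_loss(adv, i_pos, i_neg) > bpr_loss(u, i_pos, i_neg)  # loss increased
```

With the same budget eps, BIM/PGD can follow the loss surface across iterations instead of committing to a single gradient direction, which is the intuition behind its higher efficacy reported in the CV literature.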


research
08/12/2018

Adversarial Personalized Ranking for Recommendation

Item recommendation is a personalized ranking task. To this end, many re...
research
01/29/2022

Robustness of Deep Recommendation Systems to Untargeted Interaction Perturbations

While deep learning-based sequential recommender systems are widely used...
research
08/04/2020

Can Adversarial Weight Perturbations Inject Neural Backdoors?

Adversarial machine learning has exposed several security hazards of neu...
research
10/28/2021

From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems

With the prevalence of deep learning based embedding approaches, recomme...
research
05/20/2020

Adversarial Machine Learning in Recommender Systems: State of the art and Challenges

Latent-factor models (LFM) based on collaborative filtering (CF), such a...
research
07/29/2021

Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality

Recommender systems (RSs) employ user-item feedback, e.g., ratings, to m...
research
09/27/2019

Maximal adversarial perturbations for obfuscation: Hiding certain attributes while preserving rest

In this paper we investigate the usage of adversarial perturbations for ...
