Revisiting Adversarially Learned Injection Attacks Against Recommender Systems

08/11/2020
by Jiaxi Tang, et al.

Recommender systems play an important role in modern information and e-commerce applications. While increasing research is dedicated to improving the relevance and diversity of recommendations, the potential risks of state-of-the-art recommendation models remain under-explored: these models can be attacked by malicious third parties that inject fake user interactions to achieve their goals. This paper revisits the adversarially learned injection attack problem, in which the injected fake user 'behaviors' are learned locally by the attackers with their own model, one that is potentially different from the model under attack but shares similar properties that allow the attack to transfer. We find that most existing works in the literature suffer from two major limitations: (1) they do not solve the optimization problem exactly, making the attack less harmful than it could be; (2) they assume perfect knowledge for the attack, which leaves realistic attack capabilities poorly understood. We demonstrate that solving the fake-user generation problem exactly as an optimization problem can lead to a much larger impact. Our experiments on a real-world dataset reveal important properties of the attack, including its transferability and its limitations. These findings can inspire useful defenses against this type of attack.

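To make the attack setting concrete, the sketch below illustrates the general idea the abstract describes: an attacker fits a local surrogate matrix-factorization model on the observed interactions together with a set of relaxed fake-user rows, optimizes those rows by gradient ascent to promote a target item, and finally projects them onto a small interaction budget. All names and numbers here (target_item, n_fake, the inner/outer step counts, the budget) are illustrative assumptions; this is not the paper's exact formulation or code, only a minimal PyTorch sketch of the surrogate-based, bi-level optimization it discusses.

```python
# Minimal, illustrative sketch of an adversarially learned injection attack on a
# matrix-factorization recommender. Hypothetical setup, not the paper's code:
# the attacker trains a local surrogate and differentiates through its unrolled
# training to optimize relaxed fake-user interaction rows.
import torch

torch.manual_seed(0)
n_users, n_items, dim = 200, 100, 8
target_item = 7          # item the attacker wants to promote (assumption)
n_fake = 10              # number of injected fake users (assumption)

# Synthetic implicit-feedback matrix for real users (1 = observed interaction).
real = (torch.rand(n_users, n_items) < 0.05).float()

# Continuous relaxation of the fake users' interaction rows, optimized directly.
fake_logits = torch.zeros(n_fake, n_items, requires_grad=True)
attack_opt = torch.optim.Adam([fake_logits], lr=0.1)

def surrogate_scores(data, inner_steps=20, lr=0.5):
    """Unrolled training of a tiny MF surrogate; returns predicted scores.

    Keeping the graph (create_graph=True) lets the adversarial loss be
    backpropagated through surrogate training into the fake rows."""
    U = (0.1 * torch.randn(data.shape[0], dim)).requires_grad_()
    V = (0.1 * torch.randn(n_items, dim)).requires_grad_()
    for _ in range(inner_steps):
        loss = ((U @ V.T - data) ** 2).mean()
        gU, gV = torch.autograd.grad(loss, (U, V), create_graph=True)
        U, V = U - lr * gU, V - lr * gV
    return U @ V.T

for step in range(50):
    fake_rows = torch.sigmoid(fake_logits)           # relax {0,1} to (0,1)
    data = torch.cat([real, fake_rows], dim=0)
    scores = surrogate_scores(data)
    # Adversarial objective: raise the target item's score for real users.
    adv_loss = -scores[:n_users, target_item].mean()
    attack_opt.zero_grad()
    adv_loss.backward()
    attack_opt.step()

# Project each fake user onto a small interaction budget (top-5 items here).
budget = 5
top = torch.sigmoid(fake_logits).topk(budget, dim=1).indices
fake_users = torch.zeros(n_fake, n_items).scatter_(1, top, 1.0)
```

In this reading, fake_users would be the profiles an attacker injects into the victim system's training data; the paper's point is that solving this optimization precisely, rather than heuristically, makes the injected users substantially more effective.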
Related research
08/21/2023

Single-User Injection for Invisible Shilling Attack against Recommender Systems

Recommendation systems (RS) are crucial for alleviating the information ...
02/19/2020

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

Recommender system is an essential component of web services to engage u...
01/07/2021

Data Poisoning Attacks to Deep Learning Based Recommender Systems

Recommender systems play a crucial role in helping users to find their i...
08/21/2019

Assessing the Impact of a User-Item Collaborative Attack on Class of Users

Collaborative Filtering (CF) models lie at the core of most recommendati...
01/24/2019

Securing Tag-based recommender systems against profile injection attacks: A comparative study. (Extended Report)

This work addresses the challenges related to attacks on collaborative t...
09/21/2018

Adversarial Recommendation: Attack of the Learned Fake Users

Can machine learning models for recommendation be easily fooled? While t...
07/22/2021

Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack

To explore the robustness of recommender systems, researchers have propo...
