Adversarial Recommendation: Attack of the Learned Fake Users

Can machine learning models for recommendation be easily fooled? While this question has been answered for hand-engineered fake user profiles, it has not been explored for machine-learned adversarial attacks. This paper attempts to close that gap. We propose a framework for generating fake user profiles that, when incorporated into the training of a recommendation system, achieve an adversarial intent while remaining indistinguishable from real user profiles. We formulate this procedure as a repeated general-sum game between two players: an oblivious recommendation system R and an adversarial fake-user generator A. The generator pursues two goals: (G1) the rating distribution of the fake users must stay close to that of the real users, and (G2) an objective f_A encoding the attack intent, such as degrading the top-K recommendation quality of R for a subset of users, must be optimized. We propose a learning framework that achieves both goals and present extensive experiments covering multiple attack types, highlighting the vulnerability of recommendation systems.
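The abstract frames the attack as an alternating game: R retrains obliviously on all profiles, then A adjusts its fake profiles toward (G1) and (G2). Below is a minimal, self-contained Python/NumPy sketch of that loop. It is not the paper's method: the matrix-factorization recommender, the crude surrogate for A's learned update (clamping a hypothetical target_item high for G2, pulling observed fake ratings toward the real rating mean for G1), and every name and constant are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, NOT the paper's method):
# an oblivious recommender R retrains on real + fake profiles each round,
# then the attacker A nudges its fake profiles toward the attack goal (G2)
# while keeping their rating statistics close to the real users' (G1).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_fake, dim = 100, 50, 10, 8
target_item = 7  # hypothetical item A wants promoted (one possible f_A)

# Synthetic "real" ratings in (0, 5]; 0 marks an unobserved entry.
real = np.clip(rng.normal(3.0, 1.2, (n_users, n_items)), 0.1, 5.0)
real = real * (rng.random((n_users, n_items)) < 0.3)  # ~30% observed
fake = real[rng.choice(n_users, n_fake)].copy()       # A starts from copies

def train_recommender(ratings, steps=50, lr=0.01, reg=0.1):
    """R: plain matrix factorization fit on observed entries (a stand-in model)."""
    n, m = ratings.shape
    U = rng.normal(0.0, 0.1, (n, dim))
    V = rng.normal(0.0, 0.1, (m, dim))
    mask = ratings > 0
    for _ in range(steps):
        err = mask * (ratings - U @ V.T)   # error on observed entries only
        U += lr * (err @ V - reg * U)      # gradient step on user factors
        V += lr * (err.T @ U - reg * V)    # gradient step on item factors
    return U, V

obs_mean = real[real > 0].mean()           # real-rating statistic used for G1

for game_round in range(5):                # the repeated game
    # R's move: retrain obliviously on the union of real and fake profiles.
    U, V = train_recommender(np.vstack([real, fake]))

    # A's move, G1: pull observed fake ratings toward the real distribution
    # (here just its mean) so the fake profiles stay inconspicuous.
    seen = fake > 0
    fake[seen] += 0.1 * (obs_mean - fake[seen])
    # A's move, G2 (crude surrogate for a learned f_A): rate the target item
    # at the maximum in every fake profile to push it up for real users.
    fake[:, target_item] = 5.0

    pushed = (U[:n_users] @ V.T)[:, target_item].mean()
    print(f"round {game_round}: mean predicted rating of target item = {pushed:.3f}")
```

In the paper's framework A is itself learned and f_A can encode richer intents, e.g., targeting top-K quality for a user subset; the sketch only shows how the two players' updates interleave across rounds.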

Related research

08/21/2023 · Single-User Injection for Invisible Shilling Attack against Recommender Systems
Recommendation systems (RS) are crucial for alleviating the information ...

11/08/2022 · How Fraudster Detection Contributes to Robust Recommendation
The adversarial robustness of recommendation systems under node injectio...

06/23/2022 · Shilling Black-box Recommender Systems by Learning to Generate Fake User Profiles
Due to the pivotal role of Recommender Systems (RS) in guiding customers...

08/18/2019 · Detection of Shilling Attack Based on T-distribution on the Dynamic Time Intervals in Recommendation Systems
With the development of information technology and the Internet, recomme...

08/11/2020 · Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
Recommender systems play an important role in modern information and e-c...

02/14/2023 · Practical Cross-System Shilling Attacks with Limited Access to Data
In shilling attacks, an adversarial party injects a few fake user profil...

11/30/2021 · Mitigating Adversarial Attacks by Distributing Different Copies to Different Users
Machine learning models are vulnerable to adversarial attacks. In this p...
