Shilling Black-box Recommender Systems by Learning to Generate Fake User Profiles

06/23/2022
by Chen Lin, et al.

Due to the pivotal role of Recommender Systems (RS) in guiding customers towards purchases, there is a natural motivation for unscrupulous parties to spoof RS for profit. In this paper, we study the Shilling Attack, where an adversarial party injects a number of fake user profiles for improper purposes. Conventional Shilling Attack approaches lack attack transferability (i.e., attacks are not effective on some victim RS models) and/or attack invisibility (i.e., injected profiles can be easily detected). To overcome these issues, we present Leg-UP, a novel attack model based on the Generative Adversarial Network. Leg-UP learns user behavior patterns from sampled "templates" of real users and constructs fake user profiles accordingly. To simulate real users, the generator in Leg-UP directly outputs discrete ratings. To enhance attack transferability, the parameters of the generator are optimized by maximizing the attack performance on a surrogate RS model. To improve attack invisibility, Leg-UP adopts a discriminator that guides the generator to produce undetectable fake user profiles. Experiments on benchmark datasets show that Leg-UP outperforms state-of-the-art Shilling Attack methods across a wide range of victim RS models. The source code of our work is available at: https://github.com/XMUDM/ShillingAttack.
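To make the three components described above concrete (generator producing discrete fake profiles, discriminator enforcing invisibility, surrogate RS model driving transferability), the following is a minimal, hypothetical PyTorch sketch of such a training loop. All module names, dimensions, and loss weightings are illustrative assumptions, not the authors' implementation; the actual Leg-UP code is in the linked repository.

# Illustrative sketch only: a GAN-style generator of fake rating profiles with a
# discriminator (invisibility) and a surrogate recommender (attack objective).
import torch
import torch.nn as nn

N_ITEMS, LATENT, TARGET_ITEM = 100, 32, 7  # toy sizes; TARGET_ITEM is a hypothetical item to promote

class Generator(nn.Module):
    """Maps a sampled real-user 'template' profile to a fake rating profile."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ITEMS, LATENT), nn.ReLU(),
            nn.Linear(LATENT, N_ITEMS), nn.Sigmoid(),  # scores in [0, 1]
        )
    def forward(self, template):
        cont = self.net(template) * 5.0          # continuous ratings in [0, 5]
        disc = torch.round(cont)                  # discrete ratings, as in the paper
        # straight-through estimator: keep gradients despite the rounding step
        return cont + (disc - cont).detach()

class Discriminator(nn.Module):
    """Scores how 'real' a user profile looks (attack invisibility)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_ITEMS, LATENT), nn.ReLU(),
                                 nn.Linear(LATENT, 1), nn.Sigmoid())
    def forward(self, profile):
        return self.net(profile)

class SurrogateRS(nn.Module):
    """Stand-in differentiable recommender used to estimate attack performance.
    In the full method the surrogate would be (re)trained on real plus injected
    profiles; it is left untrained here to keep the sketch short."""
    def __init__(self):
        super().__init__()
        self.item_scores = nn.Linear(N_ITEMS, N_ITEMS)
    def forward(self, profiles):
        return self.item_scores(profiles)

G, D, S = Generator(), Discriminator(), SurrogateRS()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_profiles = torch.randint(0, 6, (64, N_ITEMS)).float()  # placeholder real data

for step in range(200):
    templates = real_profiles[torch.randint(0, 64, (16,))]
    fake = G(templates)

    # Discriminator step: distinguish real from generated profiles.
    opt_d.zero_grad()
    d_loss = bce(D(real_profiles), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(16, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator (invisibility) and push the
    # surrogate RS to rank the target item higher (promotion-attack objective).
    opt_g.zero_grad()
    invisibility = bce(D(fake), torch.ones(16, 1))
    attack = -S(fake)[:, TARGET_ITEM].mean()
    (invisibility + attack).backward()
    opt_g.step()

In this toy setup the surrogate's weights are random, so the attack term only illustrates where the transferability objective would enter; a faithful reproduction would follow the authors' released code.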

Related research

05/17/2020
Attacking Recommender Systems with Augmented User Profiles
Recommendation Systems (RS) have become an essential part of many online...

08/21/2023
Single-User Injection for Invisible Shilling Attack against Recommender Systems
Recommendation systems (RS) are crucial for alleviating the information ...

02/14/2023
Practical Cross-System Shilling Attacks with Limited Access to Data
In shilling attacks, an adversarial party injects a few fake user profil...

07/22/2021
Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack
To explore the robustness of recommender systems, researchers have propo...

09/21/2018
Adversarial Recommendation: Attack of the Learned Fake Users
Can machine learning models for recommendation be easily fooled? While t...

06/23/2020
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
A large body of research has focused on adversarial attacks which requir...

11/05/2020
A Black-Box Attack Model for Visually-Aware Recommender Systems
Due to the advances in deep learning, visually-aware recommender systems...
