PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion

10/21/2021
by Shijie Zhang, et al.

Due to growing privacy concerns, decentralization is rapidly emerging in personalized services, especially recommendation. Recent studies have also shown that centralized models are vulnerable to poisoning attacks that compromise their integrity. In the context of recommender systems, a typical goal of such poisoning attacks is to promote the adversary's target items by interfering with the training dataset and/or training process. Hence, a common practice is to subsume recommender systems under the decentralized federated learning paradigm, which enables all user devices to collaboratively learn a global recommender while retaining all the sensitive data locally. Because end-users are never exposed to the full model or the entire dataset, such federated recommendation is widely regarded as 'safe' against poisoning attacks. In this paper, we present a systematic approach to backdooring federated recommender systems for targeted item promotion. The core tactic is to take advantage of the popularity bias that commonly exists in data-driven recommenders: since popular items are more likely to appear in the recommendation list, our attack model makes the target item take on the characteristics of popular items in the embedding space. Then, by uploading carefully crafted gradients via a small number of malicious users during the model update, we can effectively increase the exposure rate of a target (unpopular) item in the resulting federated recommender. Evaluations on two real-world datasets show that 1) our attack model significantly boosts the exposure rate of the target item in a stealthy way, without harming the accuracy of the poisoned recommender; and 2) existing defenses are not effective enough, highlighting the need for new defenses against local model poisoning attacks on federated recommender systems.
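
As a rough illustration of the mechanism described above, and not the authors' actual PipAttack implementation, the NumPy sketch below shows how a single malicious client in a FedAvg-style federated recommender could upload an amplified item-embedding update that drags a target item toward the centroid of popular items. All identifiers (poisoned_update, popular_ids, the boost factor, the 9-benign/1-malicious split) are illustrative assumptions.

```python
# Illustrative sketch only: a hypothetical malicious client crafts an
# item-embedding update that pulls the target (unpopular) item toward the
# centroid of popular items, mimicking the popularity-bias tactic described
# in the abstract. Parameter names and values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_items, dim = 1000, 16
item_emb = rng.normal(scale=0.1, size=(num_items, dim))  # global item-embedding table

popular_ids = np.array([1, 7, 42, 99])  # items the attacker believes are popular
target_id = 500                         # unpopular item to promote


def poisoned_update(emb, target, popular, step=0.5, boost=5.0):
    """Fake 'gradient' for the embedding table: move the target embedding
    toward the popular-item centroid, amplified so it survives averaging."""
    centroid = emb[popular].mean(axis=0)
    update = np.zeros_like(emb)
    update[target] = boost * step * (centroid - emb[target])
    return update


centroid = item_emb[popular_ids].mean(axis=0)
before = np.linalg.norm(item_emb[target_id] - centroid)

# Server-side FedAvg-style aggregation over 9 benign clients plus 1 malicious client.
benign = [rng.normal(scale=0.01, size=item_emb.shape) for _ in range(9)]
malicious = poisoned_update(item_emb, target_id, popular_ids)
item_emb += np.mean(benign + [malicious], axis=0)

after = np.linalg.norm(item_emb[target_id] - centroid)
print(f"target-to-popular-centroid distance: {before:.3f} -> {after:.3f}")
```

Running the sketch prints the target item's distance to the popular-item centroid before and after one aggregation round; in practice the attacker would presumably repeat this over many rounds, and the abstract reports that such poisoning remains stealthy in the sense that recommendation accuracy is not harmed.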

research · 04/26/2022
Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios
Various attack methods against recommender systems have been proposed in...

research · 06/02/2020
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
E-commerce platforms provide their customers with ranked lists of recomm...

research · 01/26/2023
Interaction-level Membership Inference Attack Against Federated Recommender Systems
The marriage of federated learning and recommender system (FedRec) has b...

research · 12/11/2022
Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense
Federated recommendation (FedRec) can train personalized recommenders wi...

research · 07/24/2023
HeteFedRec: Federated Recommender Systems with Model Heterogeneity
Owing to the nature of privacy protection, federated recommender systems...

research · 03/04/2022
Targeted Data Poisoning Attack on News Recommendation System by Content Perturbation
News Recommendation System (NRS) has become a fundamental technology to m...

research · 04/01/2022
FedRecAttack: Model Poisoning Attack to Federated Recommendation
Federated Recommendation (FR) has received considerable popularity and a...
