Knowledge-enhanced Black-box Attacks for Recommendations

07/21/2022
by   Jingfan Chen, et al.

Recent studies have shown that deep neural network-based recommender systems are vulnerable to adversarial attacks, where attackers inject carefully crafted fake user profiles (i.e., sets of items that fake users have interacted with) into a target recommender system for malicious purposes, such as promoting or demoting a set of target items. Due to security and privacy concerns, it is more practical to perform adversarial attacks under the black-box setting, where the architecture/parameters and training data of the target system cannot be easily accessed by attackers. However, generating high-quality fake user profiles under the black-box setting is rather challenging with only limited access to the target system. To address this challenge, we introduce a novel strategy that leverages items' attribute information (i.e., the items' knowledge graph), which is publicly accessible and provides rich auxiliary knowledge to enhance the generation of fake user profiles. More specifically, we propose a knowledge graph-enhanced black-box attacking framework (KGAttack) that effectively learns attacking policies through deep reinforcement learning, in which the knowledge graph is seamlessly integrated into hierarchical policy networks to generate fake user profiles for adversarial black-box attacks. Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attacking framework under the black-box setting.
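The abstract describes building fake user profiles by integrating a knowledge graph into hierarchical policy networks. The following is a minimal sketch of that idea only, not the paper's actual KGAttack implementation: a toy item knowledge graph (`KG`), and a hypothetical `generate_fake_profile` function that anchors on the target item and then walks attribute-linked neighbours, standing in for the learned item-selection policy.

```python
import random

# Toy knowledge graph: item id -> items linked via shared attributes.
# (Illustrative data; the paper uses real item knowledge graphs.)
KG = {
    0: [1, 2],
    1: [0, 3],
    2: [0, 3],
    3: [1, 2],
}

def generate_fake_profile(target_item, profile_len, rng):
    """Hierarchically build a fake user profile around a target item:
    start from the target, then repeatedly pick unvisited KG neighbours.
    The random choice here is a stand-in for a trained RL policy."""
    profile = [target_item]
    current = target_item
    while len(profile) < profile_len:
        candidates = [i for i in KG[current] if i not in profile]
        if not candidates:  # dead end: fall back to any unvisited item
            candidates = [i for i in KG if i not in profile]
        if not candidates:
            break
        current = rng.choice(candidates)
        profile.append(current)
    return profile

rng = random.Random(0)
profile = generate_fake_profile(target_item=0, profile_len=3, rng=rng)
print(profile)  # a KG-connected profile that contains the target item
```

In the actual framework, the neighbour choice would be made by the hierarchical policy networks trained with deep reinforcement learning against the target recommender's feedback; the KG walk above only illustrates how attribute links constrain candidate items so that generated profiles look plausible.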


Related research:

- 05/17/2020: Attacking Black-box Recommendations via Copying Cross-domain User Profiles
  Recently, recommender systems that aim to suggest personalized lists of ...

- 09/11/2018: Poisoning Attacks to Graph-Based Recommender Systems
  Recommender system is an important component of many web services to hel...

- 09/01/2021: Black-Box Attacks on Sequential Recommenders via Data-Free Model Extraction
  We investigate whether model extraction can be used to "steal" the weigh...

- 09/21/2017: Defining a Lingua Franca to Open the Black Box of a Naïve Bayes Recommender
  Many AI systems have a black box nature that makes it difficult to under...

- 06/14/2023: Your Email Address Holds the Key: Understanding the Connection Between Email and Password Security with Deep Learning
  In this work, we investigate the effectiveness of deep-learning-based pa...

- 06/27/2023: Shilling Black-box Review-based Recommender Systems through Fake Review Generation
  Review-Based Recommender Systems (RBRS) have attracted increasing resear...

- 06/14/2018: Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data
  In the past few years, Convolutional Neural Networks (CNNs) have been ac...
