Attacking Black-box Recommendations via Copying Cross-domain User Profiles

05/17/2020 ∙ by Wenqi Fan, et al. ∙ City University of Hong Kong ∙ Michigan State University

Recommender systems, which aim to suggest personalized lists of items for users to interact with online, have recently drawn a lot of attention, and many of the state-of-the-art techniques are deep learning based. Recent studies have shown that these deep learning models (in particular for recommender systems) are vulnerable to attacks such as data poisoning, which generates users to promote a selected set of items. However, defense strategies have since been developed to detect such generated users with fake profiles. Thus, crafting more `realistic' user profiles for injection attacks that promote a set of items remains a key challenge for deep learning based recommender systems. In this work, we present CopyAttack, a reinforcement learning based black-box attack method that harnesses real users from a source domain by copying their profiles into the target domain with the goal of promoting a subset of items. CopyAttack efficiently and effectively learns policy gradient networks that first select, and then further refine/craft, user profiles from the source domain to ultimately copy into the target domain. CopyAttack's goal is to maximize the hit ratio of the targeted items in the Top-k recommendation lists of users in the target domain. We conduct experiments on two real-world datasets, empirically verify the effectiveness of the proposed framework, and furthermore perform a thorough model analysis.


1. Introduction

Figure 1. Two domains share some movies. The profile of a user in the source domain is copied into the target domain to attack the target item.

Recommender systems aim to suggest a personalized list of items that users are likely to interact with (e.g., click or purchase) online, especially in user-oriented online services such as e-commerce (e.g., Amazon and Taobao) and social media sites (e.g., Facebook and Twitter). Recent years have witnessed increasing efforts in adopting deep learning techniques such as RNNs and GNNs for recommendations (Wang et al., 2019). These deep learning based recommender systems have achieved state-of-the-art performance. However, it is well known that deep neural networks (DNNs) are highly vulnerable to adversarial attacks (Goodfellow et al., 2014; Dai et al., 2018; Zügner et al., 2018), where adversaries manipulate the data to degrade prediction performance. Recent studies have demonstrated that DNN based recommender systems are also vulnerable to adversarial attacks (Yang et al., 2017; Christakopoulou and Banerjee, 2019), where adversaries intend to manipulate users' decisions for their own ends. One of the most popular ways to attack recommender systems is data poisoning (also called shilling attacks) (Li et al., 2016; Christakopoulou and Banerjee, 2019; Chen et al., 2018; Fang et al., 2018; Yang et al., 2017). In these attacks, adversaries inject users with well-designed profiles into a recommender system to promote a carefully chosen subset of items (Li et al., 2016; Christakopoulou and Banerjee, 2019; Lam and Riedl, 2004). However, recent defense studies (Zhang et al., 2015; Cai and Zhang, 2019; Chen et al., 2018; Wu et al., 2012) have demonstrated that such fake user profiles are easy to detect, since they exhibit very different patterns from real profiles. Thus, how to inject users with profiles similar to real ones remains a key challenge in attacking DNN based recommender systems.

Some real-world recommendation platforms have similar functionalities and, as a consequence, share a lot of information. For example, the movie recommendation platforms IMDb and Netflix share a lot of movies, and the e-commerce sites Amazon and eBay have millions of products in common. Moreover, users of these platforms with similar functionalities also share similar behavior patterns and preferences. In fact, these observations have encouraged a large body of work on leveraging information from one platform to help recommendations on another, which is well known as cross-domain recommendation (Cantador et al., 2015). Recall that the key obstacle to attacking recommender systems is how to generate users with profiles close to real ones. To tackle this challenge, in this work we change our perspective: instead of generating users with fake profiles, we propose to copy cross-domain users with real profiles. An illustrative example is shown in Figure 1, where we have a target domain and a source domain for movie recommendations. These two domains share a set of movies. To attack/promote a targeted item in the target domain, a user's profile in the source domain can be copied into the target domain as a new user, such that the targeted movie is promoted.

In this paper, we aim to attack black-box recommendations by copying cross-domain user profiles. The copied user profiles are naturally real. However, selecting user profiles in the source domain under the black-box setting poses tremendous challenges, since we only have query access to the target model and each query feedback consists of the Top-$K$ recommended items for specific users. Moreover, the majority of existing attack methods have been designed under the white-box setting, in which the attacker is assumed to have full knowledge of the target model (e.g., model design and parameters) and dataset (Li et al., 2016; Welling and Teh, 2011; Christakopoulou and Banerjee, 2019). Existing white-box approaches, such as those based on the Projected Gradient Method and Stochastic Gradient Langevin Dynamics (Li et al., 2016; Christakopoulou and Banerjee, 2019), are not applicable to our problem. Therefore, we propose a reinforcement learning (RL) based attack method that learns to choose user profiles in the source domain using only query feedback from the target recommender system. Our major contributions are summarized as follows:

  • We introduce a novel strategy to obtain real user profiles by copying cross-domain user profiles to attack the target recommender systems;

  • We propose a novel framework (CopyAttack) to attack recommendations under the black-box setting via reinforcement learning, which can effectively and efficiently select cross-domain user profiles to perform effective attacks; and

  • We conduct comprehensive experiments on two real-world datasets to demonstrate the effectiveness of the proposed attacking framework.

The remainder of this paper is organized as follows. We review related work in Section 2 and introduce the problem definition in Section 3. We then present the proposed framework in Section 4. In Section 5, we conduct experiments on two real-world datasets to illustrate the effectiveness of the proposed method. Finally, we conclude our work with future directions in Section 6.

2. Related Work

Recommender systems aim to recommend potential items to specific users. Attacking recommender systems can influence users' beliefs and decisions for malicious purposes (Christakopoulou and Banerjee, 2019; Lam and Riedl, 2004; Chen et al., 2018). Several methods have been proposed along this direction. More specifically, (Li et al., 2016) apply the Projected Gradient Method and Stochastic Gradient Langevin Dynamics (SGLD) (Welling and Teh, 2011) to optimize a data poisoning attack model with full knowledge of factorization-based collaborative filtering. (Christakopoulou and Banerjee, 2019) introduce a two-step adversarial framework for recommendations, in which they first generate fake users through Generative Adversarial Networks (GANs), and then apply the Projected Gradient Method to further craft the fake user profiles with a suitable adversarial intent. (Yu et al., 2017) propose hybrid attacks, which elaborate fake user profiles by fusing rating information and social relationships for social recommendations. However, many of these data poisoning methods fundamentally rely on the white-box setting, in which the adversary is required to have full knowledge of the target model and dataset (Li et al., 2016). That is, they crucially require direct access to the target model as well as the dataset of the recommender system. For real-world recommender systems, expecting this kind of complete access is not realistic. Therefore, it is desirable to study black-box attacks on recommender systems, where the attacker does not have full knowledge of the target model. We propose a novel framework to attack under the black-box setting to fill this gap.

3. Problem Statement

Let the target recommender system be defined over a set of users $\mathcal{U}^{A}$ and a set of items $\mathcal{V}^{A}$, where $n^{A}=|\mathcal{U}^{A}|$ is the number of users and $m^{A}=|\mathcal{V}^{A}|$ is the number of items in the target domain $A$. In addition, user-item interactions are represented as the matrix $\mathbf{R}^{A} \in \{0,1\}^{n^{A} \times m^{A}}$, where an interaction $r_{uv}=1$ indicates that user $u$ interacted with item $v$ (e.g., clicked/bought), and $r_{uv}=0$ otherwise. Furthermore, we define the set of items a user $u$ interacts with in $A$ (i.e., their user profile) as:

$$P_{u}^{A} = \left( v_{u,1}, v_{u,2}, \ldots, v_{u,l_u} \right),$$

where the subscript denotes the sequential order of the items $u$ has interacted with (and the length $l_u$ can vary between users). We then denote the set of all user profiles in the target domain as $\mathcal{P}^{A} = \{P_{u}^{A} \mid u \in \mathcal{U}^{A}\}$.

We define the source recommender system similarly, with the set of users $\mathcal{U}^{B}$, set of items $\mathcal{V}^{B}$, interaction matrix $\mathbf{R}^{B}$, and set of user profiles $\mathcal{P}^{B}$. Note that the source domain $B$ is selected such that there are overlapping items between the target domain $A$ and the source domain $B$. In other words, there exists a set of items $\mathcal{V}^{O} = \mathcal{V}^{A} \cap \mathcal{V}^{B}$, where the overlap (i.e., the size of $\mathcal{V}^{O}$) is assumed to be sufficiently large. Thus, we then define an item profile for an item $v \in \mathcal{V}^{A}$, which is the set of users from $\mathcal{U}^{A}$ who have interacted (e.g., purchased/clicked) with $v$ in $A$, as follows:

$$Q_{v}^{A} = \left\{ u_{1}, u_{2}, \ldots, u_{c_v} \right\},$$

where $c_v$ is the number of users in the item's profile (which can differ from item to item). Let $\mathcal{Q}^{A}$ denote the set of item profiles in the target domain $A$.

Now, given the notation of the target and source recommender systems $A$ and $B$, respectively, we formally define the goal of the target recommender system. Overall, the objective of the target recommender system (which we denote here as $f_{A}$) is to predict whether a user $u$ likes (i.e., will interact with) an item $v$ as $\hat{r}_{uv} = f_{A}(u, v)$. Thus, without loss of generality, the target recommender system's task is to predict a list of Top-$K$ ranked potential items for each user. More formally, this recommendation is as follows:

$$L_{u} = \left( v_{1}, v_{2}, \ldots, v_{K} \right),$$

where $L_{u}$ denotes the Top-$K$ candidate items for user $u$. For completeness, we note that these candidate items in $L_{u}$ are ranked by $\hat{r}_{uv}$, where user $u$ is more likely to click/purchase item $v_{i}$ than $v_{i+1}$.
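To make the Top-$K$ interface concrete, the following minimal Python sketch ranks candidate items with a generic scoring function `score_fn` standing in for the black-box target model $f_A$; the function and parameter names are illustrative assumptions rather than part of the original paper.

```python
import numpy as np

def top_k_recommendation(score_fn, user, candidate_items, k=10):
    """Rank candidate items for `user` by a scoring function and return the Top-K list.
    `score_fn(user, item)` stands in for the target recommender f_A and only needs
    to return a relevance score for each (user, item) pair."""
    scores = np.array([score_fn(user, item) for item in candidate_items])
    top_idx = np.argsort(-scores)[:k]   # higher score = more likely to be interacted with
    return [candidate_items[i] for i in top_idx]
```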

Finally, we define the problem of a black-box injection attack to promote a target item $v_{t}$ by copying a set of $\Delta$ users from the source domain to the target domain, where $\Delta$ is the budget given to the attacker (in terms of the number of profiles to copy). Note that this results in the target domain having a polluted set of users and thus also a polluted interaction matrix $\mathbf{R}^{A}$. More precisely, the pollution of $\mathbf{R}^{A}$ is due to the fact that introducing the copied users brings their interactions with the set of overlapping items $\mathcal{V}^{O}$ and hence disrupts the relations between users and items in $A$. Furthermore, to be more specific, we define the promotion of a target item $v_{t}$ as having this item appear in the Top-$K$ recommendation lists of users in $\mathcal{U}^{A}$ that previously (before injecting the copied users and their associated interactions) did not have $v_{t}$ in their Top-$K$ recommendation list.

4. The Proposed Framework

In this section, we first give an overview of the proposed framework, then provide details for each of the framework's components, and finally discuss how to learn the model parameters.

Figure 2. An overview of the proposed framework. It contains three major components: user profile selection in the source domain $B$, user profile crafting, and injection attack and queries.

4.1. An Overview of the Proposed Framework

Traditional gradient-based attack methods (Li et al., 2016; Christakopoulou and Banerjee, 2019) are not applicable to attacking recommender systems in the black-box setting. Thus, we propose a reinforcement learning (RL) based attack method, CopyAttack, to learn the strategy of copying cross-domain user profiles, because reinforcement learning provides a natural way to interact with a black-box recommender system. The architecture of CopyAttack is shown in Figure 2; it consists of three major components: user profile selection, user profile crafting, and injection attack and queries.

The first component performs user profile selection for a specific target item attack; it selects user profiles from $\mathcal{P}^{B}$ (i.e., user profiles from the source domain $B$), as shown in the left part of Figure 2. However, modeling this selection process with reinforcement learning is rather challenging under limited resources (i.e., the number of queries or interactions allowed with the target recommender system), since the huge number of user profiles (a large discrete action space) in the source domain can lead to both inefficiency and ineffectiveness. Moreover, not all user profiles are useful for attacking the specific target item in the target recommender system. To address these challenges, we adopt a hierarchical-structure policy gradient (Arulkumaran et al., 2017; Chen et al., 2019; Sutton et al., 1998; Williams, 1992) with a masking mechanism to efficiently learn the strategy of effectively selecting cross-domain user profiles, so as to maximize long-run rewards.

Next, once a user profile has been selected by the first component, the second component is used for profile crafting. Profile crafting aims to further modify the user profile in order to reduce the attack cost, and can be seen in the center part of Figure 2. We note that user profiles can have varying lengths (i.e., numbers of items the users have interacted with). Thus, it could be the case that not all the interactions in a selected user profile are helpful. Furthermore, an overly long user profile might include noise as well as increase the attack cost (i.e., the number of interactions the copied user would need to perform in the target domain). Hence, we introduce a second-step policy gradient network to craft the user profiles with this attack cost in mind. More specifically, this second-step policy gradient network decides what percentage of the user profile is kept around the target item $v_t$.

Lastly, the third component's first objective is to attack the target recommender system by injecting the crafted cross-domain user profiles (i.e., those copied from the source domain). After a crafted cross-domain user profile has been copied, queries on the target recommender system are performed to obtain feedback in the form of Top-$K$ recommendations. This feedback is then used to form a reward for optimizing the whole framework (i.e., updating the policy gradient networks of the first and second components). This component can be seen in the right part of Figure 2.

Next, we give an overview of the attacking environment of our black-box reinforcement learning based attack method.

4.2. Attacking Environment Overview

The black-box attacking framework can be modeled as a Markov Decision Process (MDP) (Gosavi, 2009). The MDP is defined by the state space $\mathcal{S}$, action set $\mathcal{A}$, transition probability $\mathcal{P}$, reward $\mathcal{R}$, and discount factor $\gamma$, which are defined as follows:

State $\mathcal{S}$. A state $s_t$ consists of all the intermediate injected user profiles at time step $t$.

Action $\mathcal{A}$. The action has two components and is defined as $a_t = (a_t^{select}, a_t^{craft})$. More specifically, the attacker first selects a user $u_b$ from the cross-domain (i.e., source domain) system at state $s_t$. Then, the attacker can modify the original profile of $u_b$ to craft a profile of perhaps shorter length, resulting in $\hat{P}_{u_b}$. Note that this crafted user profile is the one ultimately injected into the target recommender system.

Transition probability $\mathcal{P}$. The transition probability $p(s_{t+1} \mid s_t, a_t)$ defines the probability of transitioning from the current state $s_t$ to the next state $s_{t+1}$ when the attacker takes action $a_t$.

Reward $\mathcal{R}$. The goal of the attacker is to attack a target item in the target recommender system according to their desires (such as promotion/demotion of that target item). In this work, we focus on the promotion attack, where the attacker seeks to have the target item recommended to as many users as possible. A natural way to define the reward for the RL based method is on the basis of ranking evaluation measures. We note that this type of reward function based on ranking evaluation is quite general and could be used for either a promotion or demotion attack. Thus, for the reward function based on ranking, we assign a positive reward for action $a_t$ when the target item $v_t$ belongs to the Top-$K$ recommended lists of the users in $\mathcal{U}_{p}$. More specifically, $\mathcal{U}_{p}$ is a set of pretend users that the attacker has already established in the target domain before the injection attacks (as seen in Figure 2). We note that these pretend users exist in the target recommender system solely so that the attacker can use them as a proxy for determining how effective their copied user profiles are at promoting the target items to all users in the target domain. We use the Hit Ratio (HR@K) as the ranking evaluation in our reward function for a given state $s_t$ and action $a_t$, which we define as follows:

$$\mathcal{R}(s_t, a_t) = \frac{1}{|\mathcal{U}_{p}|} \sum_{u \in \mathcal{U}_{p}} HR@K(u, v_t), \qquad (1)$$

where $HR@K(u, v_t)$ returns the hit ratio for the targeted item $v_t$ in the Top-$K$ listing of the attacker's pretend user $u$ (i.e., whether $v_t$ is in $L_{u}$ or not), and the reward is averaged over the hit ratios of all the pretend users in $\mathcal{U}_{p}$.
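As a minimal sketch (not the authors' code), the reward of Eq. (1) can be computed from Top-$K$ queries as below; `target_system.top_k(u, k)` is a hypothetical query interface returning the Top-K list for a pretend user.

```python
def hr_at_k(top_k_list, target_item):
    """Per-user hit indicator: 1.0 if the target item appears in the user's Top-K list."""
    return 1.0 if target_item in top_k_list else 0.0

def attack_reward(target_system, pretend_users, target_item, k=20):
    """Reward of Eq. (1): the hit ratio of the target item averaged over the attacker's
    pretend users, obtained purely through black-box Top-K queries."""
    hits = [hr_at_k(target_system.top_k(u, k), target_item) for u in pretend_users]
    return sum(hits) / len(pretend_users)
```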

Terminal. The attacking process stops when the number of actions reaches the budget $\Delta$. In addition, the process stops earlier if fewer user profiles are enough to successfully accomplish the promotion task.

4.3. User Profile Selection via Hierarchical-structure Policy Gradient

User profile selection aims to learn the strategy of selecting cross-domain user profiles. More specifically, it seeks to discover the set of users whose profiles we can inject into the target recommender system's set of users to achieve the goal of promoting a set of items. Here, the main challenges are how to handle a large-scale discrete action space (i.e., the set of all user profiles) and how to achieve satisfactory results under limited resources for interacting with the target (black-box) recommender system. Most existing RL techniques cannot handle such a large discrete action space, as the time complexity of making a decision is linear in the size of the action space (Arulkumaran et al., 2017; Zhao et al., 2018, 2019; Dulac-Arnold et al., 2015; Chen et al., 2019). To address these challenges, we propose to utilize a hierarchical-structure policy gradient network with a masking mechanism to model the process of selecting a user profile (as shown in the left part of Figure 2). More specifically, we construct a hierarchical clustering tree over cross-domain user profiles, where each leaf node represents a user profile and each non-leaf node is a policy network. Selecting a user profile in this hierarchical clustering tree amounts to seeking a path from the root to a certain leaf of the tree.

4.3.1. Hierarchical Clustering Tree over Cross-domain User Profiles

In the hierarchical clustering tree, each leaf node represents a cross-domain user profile, while each non-leaf node is a policy network. However, the question remains how to construct the clustering tree. Hence, we propose to employ a top-down divisive approach that repeatedly divides each cluster into smaller sub-clusters, such that leaf nodes under the same non-leaf node are more similar to each other than to leaf nodes under other non-leaf nodes. We note that this process starts with the entire set of users at the root of the clustering tree.

When constructing our hierarchical clustering tree, we further add the constraint that it should be balanced to ensure a proper speedup (an unbalanced clustering tree could, in the worst case, degenerate into a linked list of policy networks on the order of the number of users). Hence, we use the K-means clustering method (Lloyd, 1982) and further modify it such that it forms clusters of equal size (off by at most a single user). To achieve this, at each non-leaf node when constructing the tree (top down), we first apply traditional K-means clustering on the current set of users to obtain the centroids. Note that the number of cluster centers (i.e., centroids) is set to the number of child nodes in the hierarchical clustering tree. Then, we reassign the users to these centroids one at a time based on their Euclidean distance, to ensure that we have a balanced set of clusters (in terms of their size).
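A minimal sketch of one level of this balanced split is given below, assuming NumPy user vectors and using scikit-learn's standard KMeans for the centroid step; the greedy capacity-constrained reassignment is our illustrative reading of the balancing step, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def balanced_clusters(user_vecs, n_children):
    """One level of a balanced top-down split: run plain K-means to get centroids,
    then greedily reassign each user to its nearest centroid that still has room,
    so every cluster ends up (almost) the same size."""
    centroids = KMeans(n_clusters=n_children, n_init=10).fit(user_vecs).cluster_centers_
    capacity = int(np.ceil(len(user_vecs) / n_children))
    sizes = [0] * n_children
    assignment = [-1] * len(user_vecs)
    # distances of every user to every centroid
    dists = np.linalg.norm(user_vecs[:, None, :] - centroids[None, :, :], axis=-1)
    order = np.argsort(dists.min(axis=1))   # assign users with a clear nearest centroid first
    for u in order:
        for c in np.argsort(dists[u]):      # nearest centroid with free capacity
            if sizes[c] < capacity:
                assignment[u] = c
                sizes[c] += 1
                break
    return assignment, centroids
```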

When constructing the clustering tree, one major consideration is how to balance the number of children per node against the height of the tree. To better understand the relationship between the depth $d$ of the hierarchical clustering tree, the number of leaf nodes $|\mathcal{U}^{B}|$, and the number of child nodes $c$ per non-leaf node, we can observe the following:

$$d = \left\lceil \log_{c} |\mathcal{U}^{B}| \right\rceil,$$

and the number of non-leaf nodes of the tree is approximately $\frac{c^{d}-1}{c-1}$. In Section 5, we perform an analysis of our proposed framework CopyAttack where we vary this balance between $d$ and $c$.

We note that numerous features of the users could be used for their representations, such as user attributes, review comments, and user-item interactions. In this work, we adopt the user-item interactions to represent the users, because auxiliary information such as user attributes and review comments is not available. We use the user representations learned via matrix factorization (MF) (Koren et al., 2009) to measure similarity between users.
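For illustration, a basic SGD-trained matrix factorization (in the spirit of Koren et al., 2009) that produces the user vectors used for the similarity computation might look as follows; the hyper-parameter values are placeholders rather than the paper's settings.

```python
import numpy as np

def mf_embeddings(interactions, n_users, n_items, dim=8, lr=0.01, reg=0.01, epochs=20):
    """Plain matrix factorization trained with SGD on (user, item, rating) triples.
    The learned user vectors P are later used to measure user similarity when
    building the hierarchical clustering tree."""
    P = 0.1 * np.random.randn(n_users, dim)   # user factors
    Q = 0.1 * np.random.randn(n_items, dim)   # item factors
    for _ in range(epochs):
        for u, i, r in interactions:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q
```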

4.3.2. Masking Mechanism

While cross-domain user profiles contain informative signals about items, due to the limited number of queries to the target recommender system, not all cross-domain user profiles are useful for attacking a specific target item. In fact, only user profiles related to the specific target item are useful. Therefore, we equip the hierarchical clustering tree with a masking mechanism to locate the subset of cross-domain user profiles related to the target item. More specifically, for each target item, we mask the cross-domain user profiles that do not include that target item. As shown in the left part of Figure 2, a path from a non-leaf node to its subtree is masked when none of the cross-domain user profiles under it include the target item (shown in pink). As such, these cross-domain user profiles cannot be explored by the RL agent, which further helps reduce the action space. This reduction in the action space, in turn, makes it more efficient to locate useful cross-domain user profiles and perform an effective attack. We again note that the target item comes from the overlapping set $\mathcal{V}^{O}$; in other words, the target item exists in the source domain, so the masking will never result in the entire tree being masked.
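The masking step can be sketched as a recursive pass over the clustering tree; the dictionary-based tree representation ('children', 'user', 'masked') is a hypothetical structure chosen for illustration.

```python
def build_mask(tree_node, profiles, target_item):
    """Recursively mark which subtrees may be explored for a given target item:
    a leaf (user profile) is kept only if it contains the target item, and a
    non-leaf node is kept only if at least one of its children survives."""
    if 'user' in tree_node:                          # leaf: a source-domain user profile
        tree_node['masked'] = target_item not in profiles[tree_node['user']]
    else:
        for child in tree_node['children']:
            build_mask(child, profiles, target_item)
        tree_node['masked'] = all(c['masked'] for c in tree_node['children'])
    return tree_node
```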

4.3.3. Hierarchical-structure Policy Gradient

With the hierarchical clustering tree, the purpose of user profile selection is to learn the policy for seeking a path from the root to a certain leaf of the tree (i.e., a user in $\mathcal{U}^{B}$) at state $s_t$. Each non-leaf node in the tree is a policy gradient network, which can be modeled as a Multi-Layer Perceptron (MLP). As such, there are approximately $\frac{c^{d}-1}{c-1}$ policy gradient networks (one per non-leaf node) in the hierarchical clustering tree.

In particular, the policy network at a non-leaf node $n$ (having MLP parameters denoted as $\theta_{n}$) first takes the current state $s_t$ as input and outputs a probability distribution over all child nodes of $n$. Then, one of the children is selected according to these probabilities. The selection process keeps moving down the clustering tree of policy networks until reaching a leaf node (i.e., a user profile), which forms a path of length $d$ from the root to the leaf node as follows:

$$p_t = \left( n_{0} \rightarrow n_{1} \rightarrow \cdots \rightarrow n_{d} \right),$$

where $n_{0}$ is the root and $n_{d}$ is the selected leaf node (user profile).

This selection process can be decomposed into multiple steps according to the selected path as follows:

$$\pi\!\left(p_t \mid s_t\right) = \prod_{i=1}^{d} \pi_{\theta_{n_{i-1}}}\!\left(n_{i} \mid s_t\right).$$

We represent the state $s_t$ with the target item $v_t$ and the previously selected users, and combine them with a Multi-Layer Perceptron (MLP). To decide which child node to move to, we estimate the probability distribution over the children at node $n$ (i.e., the policy network parameterized by $\theta_{n}$) as follows:

$$\pi_{\theta_{n}}\!\left(\cdot \mid s_t\right) = \mathrm{softmax}\!\left(\mathrm{MLP}_{\theta_{n}}\!\left(\mathbf{e}_{v_t} \oplus \mathbf{h}_{t}\right)\right),$$

where $\mathbf{e}_{v_t}$ is the pre-trained item representation obtained via Matrix Factorization (MF) in the source domain $B$. We model the users already selected at state $s_t$ with an RNN model and denote its representation as $\mathbf{h}_{t}$. Here, $\oplus$ denotes the concatenation operation. Also, we seed the process by selecting the first action (i.e., the first user to inject into the target recommender system) at random, since at that time the set of selected users is empty and would not provide any insights from the RNN. We leave investigating other methods of seeding this process as future work, although a random first action is commonly used in practice.
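A simplified NumPy sketch of a single non-leaf policy network and the root-to-leaf selection walk is given below; the weight shapes, the `params` dictionary keyed by node id, and the dictionary-based tree representation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def child_distribution(state_vec, W1, b1, W2, b2, child_allowed):
    """One non-leaf policy network as a tiny MLP: map the state (target-item embedding
    concatenated with the RNN summary of already-selected users) to a probability
    distribution over child nodes, giving masked children (near-)zero probability."""
    h = np.tanh(state_vec @ W1 + b1)                 # hidden layer
    logits = h @ W2 + b2                             # one logit per child node
    logits = np.where(child_allowed, logits, -1e9)   # mask children whose subtree lacks the target item
    return softmax(logits)

def select_path(root, state_vec, params, rng=np.random.default_rng()):
    """Walk from the root to a leaf, sampling one child at every level according to
    that node's policy network; the leaf reached is the selected user profile."""
    node, path = root, []
    while 'children' in node:
        W1, b1, W2, b2 = params[node['id']]
        allowed = np.array([not c['masked'] for c in node['children']])
        probs = child_distribution(state_vec, W1, b1, W2, b2, allowed)
        idx = rng.choice(len(node['children']), p=probs)
        path.append(idx)
        node = node['children'][idx]
    return node['user'], path
```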

An illustrative example of the process of selecting cross-domain user profiles is shown in the left part of Figure 2. We have 8 user profiles and build a balanced hierarchical clustering tree of depth 3 over the user profiles in the source domain $B$. For a given state, the status point is initially located at the root and moves to one of its child nodes according to the probability distribution given by the policy network PN-1 corresponding to the root. The selection process stops when the status point arrives at a leaf node in the tree, i.e., a user's profile. Note that one path from the root to a leaf node is masked, since the profile of the corresponding source-domain user does not include the currently attacked target item. The example selection path is highlighted in green in the figure.

Although we now have an efficient mechanism for selecting the source-domain users whose profiles the attacker will copy into the target domain, we note that there could be problems with directly copying these profiles. It could be the case that not all items in a user's profile are useful for the promotion attack; some may simply inject noise and/or increase the attack cost. Hence, we next introduce another policy gradient network that learns how to craft user profiles by reducing the number of items in the user's profile (i.e., the items they have interacted with).

4.4. User Profile Crafting

Not all interactions with items in cross-domain user profiles are helpful. Directly injecting the raw user profiles into the target recommender system may increase the attacking budget and introduce noise. To address this challenge, we propose a clipping operation that crafts the raw user profiles via a policy network, as shown in the middle of Figure 2.

More specifically, we first discretize the kept length into 10 different levels as follows:

$$\mathcal{C} = \{10\%, 20\%, 30\%, \ldots, 100\%\}.$$

Then, a policy network is introduced to choose an action $c_t$ from the set $\mathcal{C}$ to decide the fraction of the profile we keep (i.e., the number of interactions retained for the selected user profile). As the selected raw user profile includes the target item $v_t$, the raw user profile is clipped around the target item with a window of size $c_t \cdot l_u$. As such, we can consider the related items interacted with both before and after the target item. For example, suppose the selected raw profile of a user contains 10 items:

$$P_{u}^{B} = \left( v_{1}, v_{2}, v_{3}, v_{4}, v_{t}, v_{6}, v_{7}, v_{8}, v_{9}, v_{10} \right).$$

If the policy network takes the action $c_t = 40\%$, the new user profile obtained through the clipping operation keeps a window of the raw user profile around the target item as follows:

$$\hat{P}_{u}^{B} = \left( v_{3}, v_{4}, v_{t}, v_{6} \right).$$

The state for the clipping operation is determined by the selected user $u_b$ and the target item $v_t$. We estimate the probability of choosing action $c_t$ over the set $\mathcal{C}$ given this state as follows:

$$\pi_{\phi}\!\left(c_t \mid u_b, v_t\right) = \mathrm{softmax}\!\left(\mathrm{MLP}_{\phi}\!\left(\mathbf{e}_{u_b} \oplus \mathbf{e}_{v_t}\right)\right),$$

where $\mathbf{e}_{u_b}$ and $\mathbf{e}_{v_t}$ are the pre-trained user and item representations obtained via MF in the source domain, respectively. Also, we note that when considering how to craft the user profiles, there are a few options for reducing the user profile size. For example, randomly selecting a subset to keep would not make sense, because it would lose the temporal relations among items that the user interacted with around the same time as the target item. Furthermore, if we were to keep, say, the items most similar to the target item from the user's profile, this might result in a less realistic user profile that could more easily be detected. Hence, clipping the user profile with a window around the target item indeed appears to be the most logical mechanism for clipping.
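The clipping operation itself can be realized in a few lines; the helper below is an illustrative sketch (keeping a contiguous window around the target item while preserving interaction order), not the paper's exact implementation.

```python
def clip_profile(profile, target_item, keep_fraction):
    """Crafting step: keep a window of `keep_fraction` of the profile centred on the
    target item, preserving the original interaction order. `keep_fraction` corresponds
    to the action chosen from the discretised set {0.1, 0.2, ..., 1.0}."""
    window = max(1, round(keep_fraction * len(profile)))
    centre = profile.index(target_item)
    half = window // 2
    start = max(0, min(centre - half, len(profile) - window))
    return profile[start:start + window]

# Example: a 10-item profile with the target item "T" at position 4 and action 0.4
# keeps a 4-item window around the target item:
# clip_profile(list("abcdTfghij"), "T", 0.4)  ->  ['c', 'd', 'T', 'f']
```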

4.5. Injection Attack and Queries

To perform an attack in the black-box setting, we only have query access to the target model and can obtain query feedback consisting of the Top-$K$ recommended items for specific users. Hence, in CopyAttack's last stage, we actually inject the selected and crafted user profiles from the source domain into the target domain. Once they are injected, the attacker can utilize the set of pretend users already established in the target domain to gauge the effectiveness of the injected user profiles and define a corresponding reward. More specifically, we use the reward function defined in Eq. (1), where effectiveness is defined based on the hit ratio (HR@K) of the targeted item aggregated over the Top-$K$ recommendations of the pretend users (i.e., those in $\mathcal{U}_{p}$). We note that these Top-$K$ recommendations are the feedback obtained by querying the target system. Once obtained, the reward is used to update the policy networks of both the profile selection and profile crafting components of CopyAttack.
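Putting the three components together, the outer attack loop might look like the following sketch; `tree`, `crafter`, and `target_system` are hypothetical interfaces wrapping the two policy networks and the black-box recommender, and `attack_reward` is the HR@K reward sketched after Eq. (1).

```python
def attack_episode(target_system, tree, crafter, pretend_users, target_item,
                   budget=30, query_every=3, k=20):
    """Simplified outer loop of the injection attack: repeatedly select a source-domain
    profile, craft it, inject it, and every few injections query the pretend users'
    Top-K lists to obtain the HR@K reward used to update both policies."""
    trajectory = []
    for step in range(budget):
        user, path = tree.select_profile(target_item)    # first policy: which profile to copy
        profile = crafter.clip(user, target_item)         # second policy: how much of it to keep
        target_system.inject(profile)                      # copy the crafted profile into the target domain
        trajectory.append((path, profile))
        if (step + 1) % query_every == 0:
            r = attack_reward(target_system, pretend_users, target_item, k)  # Eq. (1)
            tree.update_policy(trajectory, r)              # REINFORCE-style updates
            crafter.update_policy(trajectory, r)
            if r == 1.0:                                   # promotion already succeeded for all pretend users
                break
    return trajectory
```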

5. Experiment

In this section, we conduct experiments to verify the effectiveness of our model. We first introduce the experimental settings, then discuss the results (i.e., performance comparison) of various baselines, and finally study the impact of different components in our model.

5.1. Experimental Settings

5.1.1. Datasets

We have used two cross-domain real-world datasets in our experiments to validate the performance of CopyAttack.

| Datasets (Target, Source) |                        | (ML10M, Flixster) | (ML20M, Netflix) |
|---------------------------|------------------------|-------------------|------------------|
| Target Domain             | # of Users             | 19,267            | 38,087           |
|                           | # of Items             | 6,984             | 8,325            |
|                           | # of Interactions      | 437,746           | 838,491          |
| Source Domain             | # of Users             | 93,702            | 478,471          |
|                           | # of Overlapping Items | 5,815             | 5,193            |
|                           | # of Interactions      | 4,680,700         | 62,937,958       |

Table 1. Statistics of the Two Datasets

MovieLens10M (https://grouplens.org/datasets/movielens/10m/) and Flixster (https://sites.google.com/view/mohsenjamali/home) (ML10M-FX). Both datasets come from popular online movie recommendation platforms hosting millions of movies, where users can watch movies and give personal feedback (e.g., ratings). Here, we take the MovieLens10M (ML10M) dataset as the target domain to be attacked. The Flixster (FX) dataset is treated as the source domain from which user profiles are copied to attack the ML10M domain. These two datasets have a lot of items in common, and overlapping items can be aligned by movie name. We only keep interactions with high rating scores. After filtering, this cross-domain dataset (ML10M-FX) has 5,815 overlapping items.

MovieLens20M (https://grouplens.org/datasets/movielens/20m/) and Netflix (https://www.kaggle.com/laowingkin/netflix-movie-recommendation) (ML20M-NF). These two datasets also come from online movie recommendation platforms. We take the MovieLens20M (ML20M) dataset as the target domain and Netflix (NF) as the source domain. We identify overlapping movies by matching the title and publication year, and then perform filtering operations similar to those for the ML10M-FX dataset. This cross-domain dataset has 5,193 overlapping items.

The statistics of these datasets are presented in Table 1. Note that we only keep the overlapping items in the source domain.

5.1.2. Evaluation Metrics

In order to evaluate the quality of the recommender systems, we use two popular accuracy metrics for Top-K recommendation (He et al., 2017): Hit Ratio (HR@K) and Normalized Discounted Cumulative Gain (NDCG@K). We set K to 20, 10, and 5. Higher values of HR@K and NDCG@K indicate better predictive performance. As ranking all items for all users is too time-consuming, we randomly sample 100 items that the user did not interact with and rank the test item among them.
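A sketch of this sampled-negative evaluation protocol, assuming a generic `score_fn` for the recommender being evaluated (the interface and names are illustrative, not the paper's code):

```python
import numpy as np

def hr_ndcg_at_k(score_fn, user, test_item, all_items, interacted, k=10, n_neg=100,
                 rng=np.random.default_rng()):
    """Leave-one-out style evaluation as in (He et al., 2017): rank the held-out test
    item against 100 sampled items the user has not interacted with, then report HR@K
    and NDCG@K for that user (single relevant item)."""
    pool = [i for i in all_items if i not in interacted and i != test_item]
    negatives = rng.choice(pool, size=n_neg, replace=False)   # assumes len(pool) >= n_neg
    candidates = list(negatives) + [test_item]
    scores = {i: score_fn(user, i) for i in candidates}
    ranked = sorted(candidates, key=lambda i: -scores[i])[:k]
    if test_item not in ranked:
        return 0.0, 0.0
    rank = ranked.index(test_item)                  # 0-based position in the Top-K list
    return 1.0, 1.0 / np.log2(rank + 2)             # HR@K, NDCG@K for one relevant item
```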

| Dataset   | Algorithm          | HR@20  | HR@10  | HR@5   | NDCG@20 | NDCG@10 | NDCG@5 | # Avg. Items per User Profile |
|-----------|--------------------|--------|--------|--------|---------|---------|--------|-------------------------------|
| ML10M-FX  | Without Attack     | 0.0378 | 0.0228 | 0.0220 | 0.0231  | 0.0195  | 0.0192 | 0    |
|           | RandomAttack       | 0.0391 | 0.0230 | 0.0222 | 0.0233  | 0.0195  | 0.0192 | 46   |
|           | TargetAttack40     | 0.1203 | 0.0583 | 0.0094 | 0.0353  | 0.0195  | 0.0041 | 495  |
|           | TargetAttack70     | 0.1772 | 0.0854 | 0.0354 | 0.0569  | 0.0341  | 0.0181 | 818  |
|           | TargetAttack100    | 0.1166 | 0.0520 | 0.0226 | 0.0369  | 0.0209  | 0.0114 | 1350 |
|           | PolicyNetwork      | 0.1936 | 0.0665 | 0.0250 | 0.0570  | 0.0258  | 0.0126 | 705  |
|           | CopyAttack-Masking | 0.0376 | 0.0227 | 0.0220 | 0.0230  | 0.0195  | 0.0192 | 49   |
|           | CopyAttack-Length  | 0.0857 | 0.0434 | 0.0198 | 0.0282  | 0.0177  | 0.0101 | 1280 |
|           | CopyAttack         | 0.2596 | 0.1103 | 0.0415 | 0.0799  | 0.0425  | 0.0205 | 695  |
| ML20M-NF  | Without Attack     | 0.0461 | 0.0043 | 0.0000 | 0.0115  | 0.0013  | 0.0000 | 0    |
|           | RandomAttack       | 0.0468 | 0.0050 | 0.0000 | 0.0118  | 0.0015  | 0.0000 | 124  |
|           | TargetAttack40     | 0.1016 | 0.0405 | 0.0056 | 0.0288  | 0.0133  | 0.0024 | 203  |
|           | TargetAttack70     | 0.1006 | 0.0402 | 0.0054 | 0.0285  | 0.0132  | 0.0023 | 321  |
|           | TargetAttack100    | 0.0581 | 0.0006 | 0.0000 | 0.0139  | 0.0002  | 0.0000 | 593  |
|           | PolicyNetwork      | –      | –      | –      | –       | –       | –      | –    |
|           | CopyAttack-Masking | 0.0500 | 0.0045 | 0.0000 | 0.0125  | 0.0001  | 0.0000 | 133  |
|           | CopyAttack-Length  | 0.0655 | 0.0018 | 0.0000 | 0.0158  | 0.0005  | 0.0000 | 496  |
|           | CopyAttack         | 0.2704 | 0.1240 | 0.0797 | 0.0969  | 0.0609  | 0.0467 | 255  |

Table 2. Performance comparison of different attacking methods for recommender systems (the PolicyNetwork baseline did not finish within 48 hours on ML20M-NF).

5.1.3. Attacking Environment and Parameter Settings

Graph Neural Network (GNN) based techniques are state-of-the-art models for recommender systems (Wang et al., 2019). PinSage (Ying et al., 2018), a popular GNN model for item recommendation that aggregates local neighbors (users/items) in an inductive way, has been applied in industry (Ying et al., 2018; Hamilton et al., 2017). Therefore, we adopt this model as our target model, where user and item representations are learned by aggregating their local neighbors (items/users).

To train this target recommender system, we randomly split the target domain datasets into a training set for learning the parameters, a validation set for tuning hyper-parameters, and a test set. For all neural network methods, we randomly initialized the model parameters with a Gaussian distribution with mean 0 and standard deviation 0.1. The learning rate and embedding size are set to 0.001 and 8, respectively. An early stopping strategy was adopted, where we stopped training if HR@10 on the validation set did not improve for 5 successive epochs. After training, the final performance on the test sets is 0.549 in terms of HR@10 for the ML10M dataset and 0.5474 for the ML20M dataset. After training, the recommender system in the target domain is fixed, i.e., its model structure and parameters cannot be changed. We then use this well-trained model in the black-box attacking environment and evaluate the performance of the different attack methods.

We randomly sample 50 target items with fewer than 10 interactions to attack in the target domain. Unless otherwise specified, the main budget for attacking is the number of cross-domain profiles, with a maximum budget of 30. The number of pretend users in $\mathcal{U}_{p}$ is set to 50 for both datasets. To get feedback from the target system, we perform queries on the target system after every 3 injections.

We implemented the proposed method on the basis of TensorFlow. The learning rate, the action embedding size, and the discount factor are set to 0.001, 8, and 0.6, respectively. The hierarchical clustering tree has 3 layers for the Flixster dataset and 6 layers for the Netflix dataset. The user and item representations are trained with matrix factorization, using the same hyper-parameters (learning rate, embedding size, etc.).

5.1.4. Baselines

Most existing attack methods for recommender systems operate under the white-box setting, where they assume the attacker has full knowledge of the target model (e.g., model structure and parameters) and can access the dataset. There is no existing black-box attack for recommender systems. We therefore build the following baselines to evaluate attack performance:

  • RandomAttack: randomly samples cross-domain user profiles to attack the target recommender system.
  • TargetAttack40: rather than randomly sampling user profiles from the source domain, this baseline samples user profiles that contain the target item to be attacked. Moreover, we apply the same user profile crafting operation as in our proposed model, retaining 40% of each user profile.
  • TargetAttack70: similar to TargetAttack40, but retaining 70% of each user profile.
  • TargetAttack100: directly and randomly samples user profiles that include the target item from the source domain, without further crafting the selected user profiles as in TargetAttack40 and TargetAttack70.

In addition, we also build some baselines based on our proposed methods as follows:

  • PolicyNetwork: directly applies the policy gradient over the full action space, without the hierarchical clustering tree.
  • CopyAttack-Masking: evaluates the effectiveness of the masking mechanism by removing it, so the attacker can select any user profile in the source domain. Note that the user profile crafting operation is also removed in this baseline, since the attacker then has a larger probability of selecting user profiles that do not contain the target item.
  • CopyAttack-Length: evaluates the effectiveness of the user profile crafting operation by removing it from our proposed framework.

Figure 3. Effect of Depth on the Hierarchical Clustering Tree (HR@20 and NDCG@20 on ML10M-FX and ML20M-NF).
Figure 4. Effect of Item Popularity (HR@20 and NDCG@20 on ML10M-FX and ML20M-NF).
Figure 5. Effect of Budget (Cross-domain User Profiles) on ML10M-FX (HR@20 and NDCG@20).

5.2. Performance Comparison of Recommender Systems

We first compare the attacking performance of all methods. Table 2 shows the overall attacking performance of the different methods in terms of HR@K and NDCG@K on the ML10M-FX and ML20M-NF datasets. We have the following main findings.

Randomly sampling cross-domain user profiles without any strategy cannot help promote the target items. When the sampled user profiles are required to include the target item, the performance improves significantly. In other words, constraining the sampled cross-domain user profiles to those of users who have interacted with the target item yields much better performance. This indicates that user profiles containing the target item are informative for the attack.

When considering the length of the cross-domain user profiles, the methods without the target item constraint have a very low item budget (fewer than 50 items). Among the constrained variants TargetAttack-(40, 70, 100), we find that the method without profile crafting performs the worst, which implies that introducing user profile crafting is important. We further analyze the budget from the perspective of the number of user profiles in the next subsection.

To better understand CopyAttack, we compare it with PolicyNetwork, CopyAttack-Masking, and CopyAttack-Length. We can see that the performance degrades for the PolicyNetwork method, which eliminates the hierarchical clustering tree. Note that the PolicyNetwork method does not work on ML20M-NF, since we could not obtain its results within 48 hours, while the results of the other methods can be obtained in just a few hours. These observations suggest the power of the hierarchical clustering tree. We further study the impact of the hierarchical clustering tree in the next subsection. Meanwhile, when we remove the user profile crafting component (CopyAttack-Length), the promotion performance drops substantially and the item budget becomes very large, since the selected raw user profiles may introduce too much noise and degrade the performance. Moreover, when the masking mechanism is additionally removed, CopyAttack-Masking performs much worse. These results support that the masking mechanism and the user profile crafting component are beneficial for selecting strong user profiles and reducing the item budget per user profile.

5.3. Model Analysis

In this subsection, we study the impact of model components and model hyper-parameters.

5.3.1. Effect of Depth on Hierarchical Clustering Tree.

Here we investigate the hierarchical clustering tree discussed in Section 4.3.1 by showing the performance when varying the depth of the tree. We can observe in Figure 3 that a certain depth performs the best in terms of HR@20 and NDCG@20 for ML10M-FX, and similarly for ML20M-NF. The reason is believed to be the trade-off between how fine-grained the clusters can be and the number of policy networks: the deeper the tree, the more policy networks need to be learned, whereas shallower trees have fewer policy networks and gain run-time efficiency, but must rely on a few large clusters to guide the source user profile selection.

5.3.2. Effect of Item Popularity.

In this subsection, we study what kinds of items are vulnerable to attack. To this end, we group the items in the target domain by their popularity. Specifically, we form 10 groups, each accounting for 10% of the items in the target domain. We then sample 50 target items from each of these 10 groups and evaluate the attack performance on them. The results are given in Figure 4. We note that target items with high popularity are more vulnerable to attack, with the most popular items being especially vulnerable.

5.3.3. Effect of Budget (Cross-domain User Profiles).

The budget is very important for a black-box attack. In this subsection, we investigate how the budget affects the performance of the different attack methods. Figure 5 shows the performance with varying budgets on the ML10M-FX dataset. We first note that RandomAttack remains stable no matter how many user profiles are injected. As the budget increases, the performance of the methods that inject user profiles containing the target item tends to increase at first. However, TargetAttack40, TargetAttack70, and TargetAttack100 stop improving when the budget becomes too large, while CopyAttack keeps improving, since it performs queries and receives more and more reward to train the attack. The results on ML20M-NF are shown in the Supplementary Section (Appendix A).

6. Conclusion and Future work

Many user-oriented online services make use of deep learning based recommender systems to suggest personalized lists of items for users to interact with. Although prior works have shown that these models are susceptible to attack, more recent studies have shown that state-of-the-art defense strategies are able to detect data poisoning attacks on recommender systems, primarily because injected fake user profiles are easily detected. Hence, in this work we have proposed a cross-domain approach that copies users from a source domain to the target domain with the goal of promoting certain target items. More specifically, we have introduced a reinforcement learning based black-box approach that makes use of policy gradient networks to first select users to copy, then refine/craft their profiles, and finally inject them into the target domain, where we can observe feedback in terms of the Top-K recommendations of our set of pretend users. These pretend users are then used to determine the reward for updating our model parameters.

Our thorough experiments on two real-world datasets show the superiority of the proposed framework, CopyAttack, over a set of competitive baselines. We furthermore performed a model analysis to better understand the behavior of CopyAttack. Our future work will focus on effective strategies for targeted attacks on items that need not be in the source domain, on demotion attacks, and on incorporating richer side information.

References

  • Arulkumaran et al. (2017) Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. 2017. Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine 34, 6 (2017), 26–38.
  • Cai and Zhang (2019) Hongyun Cai and Fuzhi Zhang. 2019. Detecting shilling attacks in recommender systems based on analysis of user rating behavior. Knowledge-Based Systems 177 (2019), 22–43.
  • Cantador et al. (2015) Iván Cantador, Ignacio Fernández-Tobías, Shlomo Berkovsky, and Paolo Cremonesi. 2015. Cross-domain recommender systems. In Recommender systems handbook. Springer, 919–959.
  • Chen et al. (2019) Haokun Chen, Xinyi Dai, Han Cai, Weinan Zhang, Xuejian Wang, Ruiming Tang, Yuzhou Zhang, and Yong Yu. 2019. Large-scale interactive recommendation with tree-structured policy gradient. In AAAI.
  • Chen et al. (2018) Keke Chen, Patrick PK Chan, and Daniel S Yeung. 2018. Shilling attack detection using rated item correlation for collaborative filtering. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 3553–3558.
  • Christakopoulou and Banerjee (2019) Konstantina Christakopoulou and Arindam Banerjee. 2019. Adversarial attacks on an oblivious recommender. In Proceedings of the 13th ACM Conference on Recommender Systems. 322–330.
  • Dai et al. (2018) Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. 2018. Adversarial Attack on Graph Structured Data. In Proceedings of the 35th International Conference on Machine Learning, PMLR, Vol. 80.
  • Dulac-Arnold et al. (2015) Gabriel Dulac-Arnold, Richard Evans, Hado van Hasselt, Peter Sunehag, Timothy Lillicrap, Jonathan Hunt, Timothy Mann, Theophane Weber, Thomas Degris, and Ben Coppin. 2015. Deep reinforcement learning in large discrete action spaces. arXiv preprint arXiv:1512.07679 (2015).
  • Fang et al. (2018) Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, and Jia Liu. 2018. Poisoning attacks to graph-based recommender systems. In Proceedings of the 34th Annual Computer Security Applications Conference. 381–392.
  • Goodfellow et al. (2014) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).
  • Gosavi (2009) Abhijit Gosavi. 2009. Reinforcement learning: A tutorial survey and recent advances. INFORMS Journal on Computing 21, 2 (2009), 178–192.
  • Hamilton et al. (2017) Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems. 1024–1034.
  • He et al. (2017) Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web. International World Wide Web Conferences Steering Committee, 173–182.
  • Koren et al. (2009) Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 8 (2009), 30–37.
  • Lam and Riedl (2004) Shyong K Lam and John Riedl. 2004. Shilling recommender systems for fun and profit. In Proceedings of the 13th international conference on World Wide Web.
  • Li et al. (2016) Bo Li, Yining Wang, Aarti Singh, and Yevgeniy Vorobeychik. 2016. Data poisoning attacks on factorization-based collaborative filtering. In Advances in neural information processing systems. 1885–1893.
  • Lloyd (1982) Stuart Lloyd. 1982. Least squares quantization in PCM. IEEE transactions on information theory 28, 2 (1982), 129–137.
  • Sutton et al. (1998) Richard S Sutton, Andrew G Barto, et al. 1998. Introduction to reinforcement learning. Vol. 135. MIT press Cambridge.
  • Wang et al. (2019) Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural graph collaborative filtering. In Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval. 165–174.
  • Welling and Teh (2011) Max Welling and Yee W Teh. 2011. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11). 681–688.
  • Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8, 3-4 (1992), 229–256.
  • Wu et al. (2012) Zhiang Wu, Junjie Wu, Jie Cao, and Dacheng Tao. 2012. HySAD: A semi-supervised hybrid shilling attack detector for trustworthy product recommendation. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. 985–993.
  • Yang et al. (2017) Guolei Yang, Neil Zhenqiang Gong, and Ying Cai. 2017. Fake Co-visitation Injection Attacks to Recommender Systems.. In NDSS.
  • Ying et al. (2018) Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L Hamilton, and Jure Leskovec. 2018. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 974–983.
  • Yu et al. (2017) Junliang Yu, Min Gao, Wenge Rong, Wentao Li, Qingyu Xiong, and Junhao Wen. 2017. Hybrid attacks on model-based social recommender systems. Physica A: Statistical Mechanics and its Applications 483 (2017), 171–181.
  • Zhang et al. (2015) Yongfeng Zhang, Yunzhi Tan, Min Zhang, Yiqun Liu, Tat-Seng Chua, and Shaoping Ma. 2015. Catch the black sheep: unified framework for shilling attack detection based on fraudulent action propagation. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
  • Zhao et al. (2019) Xiangyu Zhao, Long Xia, Jiliang Tang, and Dawei Yin. 2019. Deep reinforcement learning for search, recommendation, and online advertising: a survey. ACM SIGWEB Newsletter Spring (2019), 1–15.
  • Zhao et al. (2018) Xiangyu Zhao, Liang Zhang, Zhuoye Ding, Long Xia, Jiliang Tang, and Dawei Yin. 2018. Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 1040–1048.
  • Zügner et al. (2018) Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. 2018. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2847–2856.

Appendix A Supplementation

We include the experimental results on the effect of the budget in Figure 6. We again note that on this dataset the PolicyNetwork baseline was unable to finish within a reasonable time limit of 48 hours, so we do not report its performance. This further demonstrates the usefulness of the hierarchical clustering tree compared to a single policy gradient network over the entire action space of all users in the source domain, since CopyAttack obtains its results in just a few hours (e.g., around 3 hours). Please note that we will release our code upon the acceptance of this paper for reproducibility.

Figure 6. Effect of Budget (Cross-domain User Profiles) on ML20M-NF (HR@20 and NDCG@20).