Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems

06/14/2020 · by Yuanjiang Cao et al. · University of Technology Sydney, The University of Adelaide, UNSW

Adversarial attacks are difficult to detect at an early stage, posing a significant threat to recommendation systems. We propose attack-agnostic detection for reinforcement learning-based interactive recommendation systems. We first craft adversarial examples to show their diverse distributions, then augment the recommendation system with a deep learning-based classifier that detects potential attacks based on the crafted data. Finally, we study the strength and frequency of adversarial examples and evaluate our model on standard datasets with multiple crafting methods. Our extensive experiments show that most adversarial attacks are effective, and that both attack strength and attack frequency impact attack performance. The strategically-timed attack achieves comparable attack performance with only 1/3 to 1/2 of the attack frequency. Moreover, our black-box detector, trained with a single crafting method, generalizes over several other crafting methods.




1. Introduction

Interactive recommendation systems capture dynamic, personalized user preferences by continuously improving their strategies (Mahmood and Ricci, 2007; Thompson et al., 2004; Taghipour and Kardan, 2008). They have attracted enormous attention and been deployed by leading companies such as Amazon, Netflix, and YouTube. Traditional methods model user-system interactions with Multi-Armed Bandits (MAB) or Reinforcement Learning (RL). The former views each action choice as a repeated single-step process, while the latter considers both immediate and future rewards to model users' long-term benefits. RL-based systems employ a Markov Decision Process (MDP) agent that estimates value based on both actions and states, rather than on actions alone as MAB does.

However, reinforcement learning-based models can be fooled by small disturbances of the input data (Szegedy et al., 2013; Goodfellow et al., 2014). Small, imperceptible perturbations, crafted as adversarial examples, can increase prediction error or reduce reward in both supervised and RL tasks, and such input noise transfers across different parameters and even different models, including recurrent networks and RL agents (Gao et al., 2018; Huang et al., 2017). Moreover, the embedding vectors of users, items, and relations are piped directly into RL-based recommendation models, making it hard for humans to judge their true values or to diagnose real issues inside the models. Attackers can easily exploit these characteristics to disrupt recommendation systems silently, making defense against adversarial attacks a non-trivial task for RL-based recommendation systems.

In this work, we aim to develop a general detection model that detects attacks and increases defense ability, providing a practical strategy to overcome the dynamic 'arms race' of attacks and to defend in the long run. We make the following contributions:

  • We systematically investigate adversarial attacks and detection approaches with a focus on reinforcement learning-based recommendation systems and demonstrate the effectiveness of the designed adversarial examples and strategically-timed attack.

  • We propose an encoder-classifier detection model for attack-agnostic detection. The encoder captures the temporal relationships among sequential actions in reinforcement learning. We further use an attention-based classifier to highlight the critical time steps within the large interaction space.

  • We empirically show that even small perturbations generated by most attack methods can significantly reduce the performance of the recommendation system. Our statistical validation shows that multiple attack methods shift the attacked system's actions in similar ways, providing insights for improving detection performance.

Figure 1. Our proposed Adversarial Attack and Detection Approach for RL-based Recommender Systems.

2. Methodology

2.1. RL-based Interactive Recommendation

Interactive recommendation systems suggest items to users and receive their feedback. Given a user u, a set of items I, and the user's feedback history up to step t, the recommendation system suggests a new item for step t+1. This problem can be represented as a Markov Decision Process as follows:

  • State (s_t): the historical interactions between a user and the recommendation system, encoded into a vector by an embedding or encoder module.

  • Action (a_t): an item or a set of items recommended by the RL agent.

  • Reward (r_t): a variable derived from the user's feedback that guides the reinforcement model towards true user preference.

  • Policy (π(a_t | s_{t-1})): a conditional probability distribution over the items the agent might recommend to a user given the state of the last time step, s_{t-1}. The learning process aims to obtain an optimal policy.

  • Value function (Q(s_t, a_t)): the agent's estimation of the reward for the current state s_t and the recommended item a_t. We define the reward as the cosine similarity between the user and item embedding vectors.
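The cosine-similarity reward above can be sketched as follows; this is a minimal illustration, and the embedding dimensions and values are arbitrary:

```python
import numpy as np

def reward(user_emb: np.ndarray, item_emb: np.ndarray) -> float:
    """Reward r_t: cosine similarity between user and item embedding vectors."""
    denom = np.linalg.norm(user_emb) * np.linalg.norm(item_emb)
    return float(user_emb @ item_emb / denom) if denom > 0 else 0.0
```

A reward of 1 means the recommended item's embedding points in the same direction as the user's embedding; orthogonal embeddings yield a reward of 0.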

The reinforcement agent can follow REINFORCE with a baseline or an Actor-Critic algorithm, both of which consist of a value network and a policy network (Xian et al., 2019). The attack model may generate adversarial examples using either the value network (Huang et al., 2017) or the policy network (Pattanaik et al., 2017).

2.2. Attack Model

FGSM-based attack. We define an adversarial attack as a small additional perturbation δ on a benign example x, where x can be a composition of the embedding vectors of users, relations, and items (Xian et al., 2019). Unlike perturbations on images or text, δ can be comparatively large, since manually inspecting embedding vectors would take enormous effort. Following FGSM, we define an adversarial example as

x' = x + δ,   δ = ε · sign(∇_x J(x)),

where J is the objective of the attacked network and ε controls the attack strength.
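As an illustration, here is a minimal FGSM-style perturbation under an assumed linear surrogate loss; the loss J and its weights w are hypothetical stand-ins, not the paper's value or policy network:

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, grad: np.ndarray, eps: float = 0.1) -> np.ndarray:
    """x' = x + eps * sign(grad): one signed-gradient step on the embedding."""
    return x + eps * np.sign(grad)

# Hypothetical linear surrogate loss J(x) = w . x, so grad_x J = w.
w = np.array([0.5, -2.0, 1.0])
J = lambda x: float(w @ x)

x = np.array([0.1, 0.2, -0.3])        # toy benign embedding
x_adv = fgsm_perturb(x, w, eps=0.1)   # adversarial example
```

Each embedding dimension moves by exactly ε in the direction that increases the loss, so the perturbation stays small in the infinity norm while still shifting the model's output.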
Attack with lower frequency. The strategically-timed attack (Lin et al., 2017) aims to decrease the attack frequency without sacrificing the performance of the un-targeted reinforcement attack. We formally present it as

x'_t = x_t + b_t · δ_t,   f = (Σ_t b_t) / T,

where b_t ∈ {0, 1} is a binary variable that controls when to attack and f is the frequency of adversarial examples over T time steps. There are two approaches to generate the binary sequence b = (b_1, …, b_T): optimizing a hard integer programming problem, or generating the sequence via heuristics. Let p1(s_t) and p2(s_t) be the two largest probabilities of the policy π(· | s_t); differently from (Lin et al., 2017), we set the attack mask at time step t as

b_t = 1 if p1(s_t) − p2(s_t) ≥ β, and b_t = 0 otherwise,

where β is a threshold chosen to meet the target frequency. That is, we pick out the steps with the largest distance between the two highest action probabilities, attacking the agent's most confident actions. Experiments show that this strategy works. In contrast, the Jacobian-based Saliency Map Attack (JSMA) (Papernot et al., 2016) and Deepfool (Moosavi-Dezfooli et al., 2016) are based on the gradient of actions rather than the gradient of value. A key component of JSMA is the saliency map, which decides which dimensions of the input vectors (pixels, in image classification) are modified. Deepfool pinpoints the attack dimensions by comparing affine distances between the current class and the other classes.
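The strategically-timed mask heuristic above can be sketched as follows; the fixed per-episode budget is an assumption standing in for the paper's frequency/threshold choice:

```python
import numpy as np

def attack_mask(policy_probs, budget: int) -> np.ndarray:
    """b_t = 1 on the `budget` steps with the largest gap between the two
    highest action probabilities, i.e. the agent's most confident actions."""
    probs = np.asarray(policy_probs)          # shape (T, num_actions)
    top2 = np.sort(probs, axis=1)[:, -2:]     # two largest probs per step
    gap = top2[:, 1] - top2[:, 0]             # p1 - p2 confidence gap
    mask = np.zeros(len(probs), dtype=int)
    mask[np.argsort(gap)[-budget:]] = 1       # attack the largest gaps
    return mask
```

Choosing the largest p1 − p2 gaps concentrates the attack budget on steps where flipping the action requires changing a confident decision, which is where an attack alters the trajectory most.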

2.3. Detection Model

The detection model is a supervised classifier that detects adversarial examples from the actions of the reinforcement agent. We suppose the action distribution of an agent is shifted by adversarial examples (Section 3 gives statistical evidence of this drift). Given an action sequence a = (a_1, …, a_T), the detection model aims to establish a separating hyperplane between adversarial and normal examples, thereby estimating the probability p(y | a_1, …, a_T), where y is a binary variable indicating whether the input data are attacked.

To detect the adversarial examples presented in the last section, we employ an attention-based classifier. The detection model consists of two parts. The first is a GRU encoder that encodes the action sequence into a low-dimensional feature vector. The second is an attention-based decoder with a classifier to separate the two kinds of data. This encoder-decoder model has a bottleneck structure that filters out noisy information. The GRU follows the standard formulation:

z_t = σ(W_z [h_{t-1}, a_t]),   r_t = σ(W_r [h_{t-1}, a_t]),
h̃_t = tanh(W_h [r_t ⊙ h_{t-1}, a_t]),   h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t.

We use a = (a_1, …, a_T) to denote the action sequence, which is a series of user-relation vectors or item embedding vectors, and h_T to denote the output of the encoder.

The attention-based decoder follows the standard form

α_{t,i} = softmax_i(score(c_t, h_i)),   o_t = Σ_i α_{t,i} h_i,

where c_t is the combination of the action embedding, the decoder hidden state, and the encoder output; the combination method is a linear unit with ReLU activation. The GRU is reused to generate the decoder hidden states at each time step, and the loss is computed at every step. After passing through the attention model and the GRU, the resulting vector is piped into a linear unit with a sigmoid function to predict whether the agent is polluted. The loss function is the cross entropy between the true label and the predicted probability.
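A compressed PyTorch sketch of this detector follows. Note the simplifications: the paper's model reuses a GRU decoder with a per-step loss, which is collapsed here into a single attention-pooling step, and all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttnDetector(nn.Module):
    """Sketch: GRU encoder + attention over time steps + sigmoid head."""
    def __init__(self, emb_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # per-step attention score
        self.head = nn.Linear(hidden, 1)   # attacked vs. benign

    def forward(self, actions: torch.Tensor) -> torch.Tensor:
        h, _ = self.encoder(actions)              # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)    # highlight critical steps
        ctx = (w * h).sum(dim=1)                  # attention-pooled vector
        return torch.sigmoid(self.head(ctx)).squeeze(-1)

detector = AttnDetector()
p = detector(torch.randn(4, 5, 16))  # 4 action sequences of length 5
```

Training would pair these probabilities with binary attacked/benign labels under `nn.BCELoss`, matching the cross-entropy loss described above.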

3. Experiments

In this section, we report our experiments to evaluate attack methods and our detection model.

3.1. Dataset and Experiment Setup

We conduct experiments following (Chen et al., 2019) and (Xian et al., 2019) on the Amazon dataset (He and McAuley, 2016). This public dataset contains user reviews and metadata from the Amazon e-commerce platform from 1996 to 2014. We use three subsets, namely Beauty, Cellphones, and Clothing, as originally provided by (Xian et al., 2019) on GitHub. Details of the Amazon dataset analysis can be found in (Xian et al., 2019).

Our experiments are based on (Xian et al., 2019). During dataset preprocessing, feature words with TF-IDF scores higher than 0.1 are filtered out. 70% of each dataset forms the training set; the rest is the test set. We take the actions of the reinforcement agent as the detection data. We define the actions of PGPR (Xian et al., 2019) as heterogeneous graph paths that start from users and have a length of 4. The three Amazon sub-datasets (Beauty, Cellphones, and Clothing) contain 22,363, 27,879, and 39,387 users, respectively. To accelerate the experiments, we use the first 10,000 users of each dataset to produce adversarial examples. Users in Beauty have 127.51 paths on average; the counterparts for Cellphones and Clothing are 121.92 and 122.71. As the number of paths is large, we use the first 100,000 paths for training and validation with an 80/20 split, and randomly sample another 100,000 paths from each action file to form the test set.

We attack the trained RL agent with the methods in Section 2.2. We slightly modify JSMA and Deepfool for our experiments: for JSMA, we create the saliency map by calculating the product of the target label and the current label, achieving both effectiveness and higher efficiency (by 0.32 seconds per iteration); for Deepfool, we decrease the computational load of the gradient group by sampling. Besides, we set the hidden size of the GRU encoder to 32, the dropout rate of the attention-based classifier to 0.5, the maximum length of a user-item path to 4, and the learning rate and weight decay of the Adam optimizer to 5e-4 and 0.01, respectively.

3.2. Attack Experiments

Adversarial attack results. We are interested in how vulnerable the agent is to perturbations in the semantic embedding space. An attack is effective if a small perturbation leads to a notable performance reduction. We reuse the evaluation metrics of (Xian et al., 2019), namely Normalized Discounted Cumulative Gain (NDCG), Recall, Hit Ratio (HR), and Precision. All metrics are computed on the top-10 predictions for each user and reported as percentages.
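For concreteness, a standard binary-relevance NDCG@10 sketch is given below; the exact variant in (Xian et al., 2019) may differ in gain or discount details:

```python
import numpy as np

def ndcg_at_k(recommended, relevant, k: int = 10) -> float:
    """NDCG@k with binary relevance and a log2 positional discount."""
    rel = set(relevant)
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(list(recommended)[:k]) if item in rel)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(rel), k)))
    return dcg / idcg if idcg > 0 else 0.0
```

A perfect top-k ranking scores 1.0; pushing relevant items lower in the list shrinks the discounted gain and hence the score.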

Table 1 shows the performance of the different attack methods. The attack results share the same trend as the distribution discrepancy. Most attack methods significantly reduce the performance of the reinforcement system. FGSM with parameter 0.1 achieves the strongest attack, revealing that a perturbation along every single dimension can drastically change the agent's actions. FGSM with parameter 1.0 is less effective, with metrics merely fluctuating around the original baseline (Table 1), partly because of the small distribution shift this setting creates. JSMA achieves comparable results while modifying only a small attack region. Attacks on the Clothing and Cellphones sub-datasets show similar effects.

Method Parameter NDCG Recall HR Precision MMD-org MMD-
Original - 4.654 6.572 13.993 1.675 0.121 0.620
FGSM 0.1 2.695 3.714 6.599 0.693 0.604 0.010
FGSM 1.0 4.567 6.555 13.751 1.653 0.016 0.573
FGSM 0.5 2.830 3.909 7.351 0.787 0.570 0.011
JSMA - 2.984 3.844 8.254 0.931 0.412 0.034
Deepfool - 3.280 4.352 9.548 1.050 0.177 0.458
Table 1. Adversarial attack results and the MMD between benign and adversarial distributions on Amazon Beauty.

Impact of attack frequency. We conduct two experiments on attack frequency: random attacks and strategically-timed attacks. The difference is whether adversarial examples are crafted at a fixed frequency f or generated by the strategically-timed method in Section 2.2. The NDCG results are presented in Figure 2; the other metrics show a similar trend. Figure 2 shows that the random attack performs worse than the strategically-timed attack: the strategically-timed method achieves a significant reduction in all metrics while attacking only a fraction of the time steps.

Figure 2. NDCG under varying attack frequency on the Beauty and Clothing subsets. Dash-dot lines represent random attacks; solid lines represent strategically-timed attacks. Blue and green lines denote the two crafting methods (FGSM in blue).

Analysis of adversarial examples. We use the Maximum Mean Discrepancy (MMD) as a statistical measure of distribution distance. For samples X = {x_1, …, x_n} and X' = {x'_1, …, x'_m}, this divergence is defined as

MMD²(X, X') = (1/n²) Σ_{i,j} k(x_i, x_j) − (2/nm) Σ_{i,j} k(x_i, x'_j) + (1/m²) Σ_{i,j} k(x'_i, x'_j),

where k is a kernel function, i.e., a radial basis function, and X, X' are the benign and adversarial examples.
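A compact sketch of this estimator (the kernel width gamma is an assumed choice):

```python
import numpy as np

def mmd_rbf(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased (V-statistic) estimate of MMD^2 with an RBF kernel."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise dists
        return np.exp(-gamma * sq)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())
```

Identical samples give an MMD of zero, while a shifted (attacked) distribution drives it up, which is how MMD-org separates benign from adversarial action data.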

MMD-org measures the discrepancy between the original and adversarial datasets, and MMD- measures the discrepancy among different attack methods. The results (Table 1) show that the adversarial distributions differ from the original distribution. Moreover, the perturbed distributions are close to one another regardless of attack type. This suggests that a single classifier can separate benign data from adversarial data and detect several attacks at the same time, an insight that might transfer to other reinforcement learning attack detection tasks.

3.3. Detection Experiments

The above analysis shows, from a statistical perspective, that one classifier can detect multiple types of attacks. We evaluate detection performance using Precision, Recall, and F1 score.

Our attention-based detection model is trained on the FGSM attack with parameter 0.1 and is used to detect all attack types. The results (Table 2) show that the stronger the attack, the better the detection performance, and that our model can detect weak attacks as well. The poor detection of the attack with parameter 1.0 is explained by the MMD analysis above: its relatively high precision but very low recall indicate that most of its adversarial examples lie close to the benign data, which confuses the detector. Detection on the Cellphones dataset follows the same pattern, where the detector performs well on the attack with parameter 1.0 in the last row yet achieves worse performance on the other tests.

Dataset Attack Precision Recall F1 Score
Beauty 0.1 0.919 0.890 0.904
1.0 0.605 0.119 0.199
0.5 0.918 0.871 0.894
JSMA 0.910 0.793 0.848
Deepfool 0.915 0.840 0.876
Cellphones 0.1 0.801 0.781 0.791
1.0 0.754 0.593 0.664
0.5 0.795 0.752 0.773
1.0 0.810 0.825 0.817
Clothing 0.1 0.911 0.866 0.888
1.0 0.541 0.099 0.168
0.5 0.912 0.879 0.895
Dataset Frequency Precision Recall F1 Score
Beauty 0.02 0.823 0.362 0.503
0.08 0.918 0.872 0.894
0.3 0.922 0.927 0.924
Dataset Frequency Precision Recall F1 Score
Beauty 0.579 0.921 0.912 0.917
0.316 0.918 0.879 0.898
0.118 0.837 0.401 0.543
Table 2. Detection Result & Factor Analysis

Our factor analysis results (Table 2) show that the detection model can detect attacks even at low attack frequencies, but detection accuracy decreases as the attack frequency drops: recall falls to 40.1% when only 11.8% of examples are attacks.

4. Conclusion

Adversarial attacks on reinforcement learning-based recommendation systems can degrade the user experience. In this paper, we systematically study adversarial attacks and the impact of their factors. Our statistical analysis shows that classifiers, especially an attention-based detector, can separate the detection data well. Our extensive experiments show that both our attack and detection models achieve satisfactory performance.


References

  • H. Chen, X. Dai, H. Cai, W. Zhang, X. Wang, R. Tang, Y. Zhang, and Y. Yu (2019) Large-scale interactive recommendation with tree-structured policy gradient. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 3312–3320. Cited by: §3.1.
  • J. Gao, J. Lanchantin, M. L. Soffa, and Y. Qi (2018) Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pp. 50–56. Cited by: §1.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and Harnessing Adversarial Examples. External Links: 1412.6572, Link Cited by: §1.
  • R. He and J. McAuley (2016) Ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pp. 507–517. Cited by: §3.1.
  • S. Huang, N. Papernot, I. Goodfellow, Y. Duan, and P. Abbeel (2017) Adversarial Attacks on Neural Network Policies. External Links: Link Cited by: §1, §2.1.
  • Y. Lin, Z. Hong, Y. Liao, M. Shih, M. Liu, and M. Sun (2017) Tactics of Adversarial Attack on Deep Reinforcement Learning Agents. External Links: Link Cited by: §2.2.
  • T. Mahmood and F. Ricci (2007) Learning and adaptivity in interactive recommender systems. In Proceedings of the ninth international conference on Electronic commerce, pp. 75–84. Cited by: §1.
  • S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582. Cited by: §2.2.
  • N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami (2016) The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387. Cited by: §2.2.
  • A. Pattanaik, Z. Tang, S. Liu, G. Bommannan, and G. Chowdhary (2017) Robust Deep Reinforcement Learning with Adversarial Attacks. External Links: Link Cited by: §2.1.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. External Links: 1312.6199, Link Cited by: §1.
  • N. Taghipour and A. Kardan (2008) A hybrid web recommender system based on q-learning. In Proceedings of the 2008 ACM symposium on Applied computing, pp. 1164–1168. Cited by: §1.
  • C. A. Thompson, M. H. Goker, and P. Langley (2004) A personalized system for conversational recommendations. Journal of Artificial Intelligence Research 21, pp. 393–428. Cited by: §1.
  • Y. Xian, Z. Fu, S. Muthukrishnan, G. de Melo, and Y. Zhang (2019) Reinforcement knowledge graph reasoning for explainable recommendation. arXiv preprint arXiv:1906.05237. Cited by: §2.1, §2.2, §3.1, §3.2.