Privacy-Aware Recommendation with Private-Attribute Protection using Adversarial Learning

11/22/2019
by Ghazaleh Beigi, et al.
Arizona State University

Recommendation is one of the critical applications that helps users find information relevant to their interests. However, a malicious attacker can infer users' private information via recommendations. Prior work obfuscates user-item data before sharing it with the recommendation system. This approach does not explicitly address the quality of recommendation while performing data obfuscation. Moreover, it cannot protect users against private-attribute inference attacks based on recommendations. This work is the first attempt to build a Recommendation with Attribute Protection (RAP) model which simultaneously recommends relevant items and counters private-attribute inference attacks. The key idea of our approach is to formulate this problem as an adversarial learning problem with two main components: the private-attribute inference attacker and the Bayesian personalized recommender. The attacker seeks to infer users' private-attribute information from their item lists and recommendations. The recommender aims to extract users' interests while employing the attacker to regularize the recommendation process. Experiments show that the proposed model both preserves the quality of the recommendation service and protects users against private-attribute inference attacks.


Related research:

  • Graph Embedding for Recommendation against Attribute Inference Attacks (01/29/2021)
  • AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning (05/13/2018)
  • PrivNet: Safeguarding Private Attributes in Transfer Learning for Recommendation (10/16/2020)
  • Joint Item Recommendation and Attribute Inference: An Adaptive Graph Convolutional Network Approach (05/25/2020)
  • DeepCloak: Adversarial Crafting As a Defensive Measure to Cloak Processes (08/03/2018)
  • Zoom on the Keystrokes: Exploiting Video Calls for Keystroke Inference Attacks (10/22/2020)
  • Attribute-aware Diversification for Sequential Recommendations (08/03/2020)

1. Introduction

Recommendation systems play an important role in helping users find relevant and reliable information that is of potential interest (Koren, 2009). These systems build profiles that represent users' interests (Konstan and Riedl, 2012; Beigi and Liu, 2018b) and recommend relevant items based on the constructed profiles (Rashid et al., 2002). Despite their effectiveness, recommendation systems can be a source of user privacy breaches. Existing work has shown that if malicious attackers have access to the system's output and unrestricted auxiliary information about their targets, they are able to extract the targets' entire user-item interaction histories (Ramakrishnan et al., 2001; Calandrino et al., 2011; McSherry and Mironov, 2009; Beigi and Liu, 2018a). One main reason is that a recommendation system's outputs (i.e., product recommendations) are partially derived from other users' choices (i.e., user-item interaction histories). Thus, privacy concerns arise.

One privacy issue is the re-identification attack, in which a malicious adversary attempts to infer a user's actual ratings by determining whether a target user is in the database (Beigi and Liu, 2018a). Prior research on privacy-preserving recommendation systems has extensively addressed this type of privacy breach. Common techniques include (1) modifying the output of the recommendation algorithm so that the absence or presence of a single rating or an entire user's data is masked (i.e., differential-privacy-based techniques) (McSherry and Mironov, 2009; Hua et al., 2015; Zhu and Sun, 2016); and (2) coarsening the user's interaction history by adding dummy items and ratings so that the adversary cannot deduce the user's actual ratings and preferences (i.e., perturbation-based techniques) (Rebollo-Monedero et al., 2011; Polat and Du, 2003; Luo and Chen, 2014).

Another privacy issue is the disclosure of users' private-attribute information through leaked interaction histories (Weinsberg et al., 2012; Beigi et al., 2019c). Private-attribute information comprises attributes that users do not wish to disclose, such as age, gender, occupation, and location. This type of privacy breach is known as the private-attribute inference attack, in which the adversary's goal is to infer private attributes of target users given their interaction histories. Little has been done to protect users against private-attribute inference attacks (Jia and NZhenqiang, 2018; Weinsberg et al., 2012; Beigi et al., 2019b, c), and the existing defenses focus on anonymizing user-item data before publishing it. Data obfuscation comes at the cost of utility loss, where utility is defined as the quality of service users receive. Existing work addresses utility loss by minimizing the amount of changes made to the data (Jia and NZhenqiang, 2018; Weinsberg et al., 2012). In the context of recommendation, however, the utility loss incurred by this approach can lead to degraded recommendation results. Moreover, merely sharing perfectly obfuscated user-item data with a recommendation system does not necessarily prevent the adversary from inferring users' private information in the future, when they receive and accept new recommendations (e.g., when purchasing new products).

This research aims to devise a mechanism to counter private-attribute inference attacks in the context of recommendation systems. We propose a privacy-aware Recommender with Attribute Protection, namely RAP, which offers relevant products while making it difficult to infer a user's private attributes from his interaction history and recommendations. The proposed model seeks to prevent the leakage of users' private-attribute information while retaining high utility for users.

Recommendation while countering private-attribute inference attacks can be naturally formulated as an adversarial learning problem (Goodfellow et al., 2014). Our proposed RAP has two components: a Bayesian personalized ranking recommender and a private-attribute inference attacker (illustrated in Figure 1). The private-attribute inference attacker seeks to accurately infer users' private-attribute information and iteratively adapts its model with respect to the existing recommender. The recommender extracts latent representations of users and items for personalized recommendation, and simultaneously utilizes the private-attribute inference attacker to regularize the recommendation process by incorporating the constraints necessary to fool the attacker. RAP therefore optimizes a composition of two conflicting objectives, modeled as a min-max game between the recommender and attacker components. Its objective is to recommend relevant, ranked items to users such that a potential adversary cannot infer their private-attribute information.

In essence, we investigate the following research issues: (1) whether we can develop a personalized privacy-aware recommendation system that guards against private-attribute inference attacks; and (2) how we can ensure that a user's private attributes are effectively obscured after he receives personalized recommendations. Our research on these issues results in a novel framework, RAP, with the following main contributions:


  • To the best of our knowledge, this is the first effort to propose a recommendation system that guards against the inference of private-attribute information while maintaining user utility.

  • The proposed RAP model uses an attacker component that regularizes the recommendation process to protect users against private-attribute inference attacks.

  • The proposed RAP model is a general framework for recommendation systems. Both the integrated Bayesian personalized recommender and the private-attribute attacker can easily be replaced by different models designed for specific tasks.

  • We conduct experiments on real-world data to demonstrate the effectiveness of RAP. Our empirical results show that RAP preserves both user utility and privacy, outperforms the state-of-the-art related work, and enables an adjustable balance between private-attribute protection and personalized recommendation.

2. Problem Statement

Before formally defining our problem, we first describe the notation used in this paper. We consider a set of items and a set of users; for each user we track the set of items he has rated and the set of items recommended to him, a set of private attributes (e.g., age, gender), and the user-item rating matrix. The goal is to recommend products to people that they will find interesting, while protecting their privacy against a malicious adversary who attempts to infer their private-attribute information from their item lists. The item list of each user is the union of his previously rated items and his newly recommended items. In particular, the malicious attacker has a framework which takes a target user's interactions and infers the user's private attributes.

Problem 1.

We aim to learn a function that recommends interesting and relevant products to each user such that (1) the adversary cannot infer the target user's private-attribute information from the user's item list, and (2) the set of recommended items is interesting to the user.

Note that the goal is to protect users against a malicious adversary who has access to the users' item lists, not against the recommender, which is trusted.
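As a reading aid, one plausible formalization of this problem can be sketched with notation we introduce here; the symbols are illustrative assumptions rather than the paper's original ones. Let $\mathcal{U}$ denote the users, $\mathcal{V}$ the items, $\mathcal{V}_u^{r}$ the items rated by user $u$, $\mathcal{V}_u^{n}$ the items newly recommended to $u$, $\mathcal{V}_u = \mathcal{V}_u^{r} \cup \mathcal{V}_u^{n}$ his item list, $p_u$ the value of a private attribute $p \in \mathcal{P}$ for $u$, and $\mathbf{R}$ the user-item rating matrix. The attacker is then a function $g(\mathcal{V}_u) \rightarrow \hat{p}_u$, and we seek a recommender $f(\mathbf{R}, u) \rightarrow \mathcal{V}_u^{n}$ such that $\mathcal{V}_u^{n}$ matches $u$'s preferences while $g(\mathcal{V}_u)$ fails to recover the true value $p_u$ for every private attribute.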

3. Related Work

Figure 1. The architecture of Recommendation with Attribute Protection (RAP), with two components: a Bayesian personalized recommender and a private-attribute inference attacker.

Explosive growth of the Web has raised numerous challenges for online users, including the spread of disinformation (Alvari et al., 2018; Alvari and Shakarian, 2019; Alvari et al., 2019b; Alvari et al., 2019a) and threats to users' privacy (Beigi and Liu, 2019, 2018a). User privacy issues have been studied from different aspects, such as textual information (Beigi et al., 2019c, b), web browsing histories (Beigi et al., 2019a), private-attribute disclosure (Beigi et al., 2019c; Jia and NZhenqiang, 2018), and recommendation systems (Zhu and Sun, 2016; Luo and Chen, 2014) (for a comprehensive survey refer to (Beigi and Liu, 2018a)). Below, we discuss the lines of research most related to our work and elaborate on how our work differs from them.

Privacy and Recommendation Systems. Existing privacy-preserving work in recommendation systems focuses on protecting users against re-identification attacks, in which an adversary tries to infer a target user's actual ratings and investigate whether the target is in the database. These methods can be categorized into differential-privacy-based (McSherry and Mironov, 2009; Jorgensen and Yu, 2014; Hua et al., 2015; Zhu and Sun, 2016) and perturbation-based (Rebollo-Monedero et al., 2011; Polat and Du, 2003; Luo and Chen, 2014) approaches. Some methods utilize the differential privacy strategy (Dwork, 2008) to modify the answers of the recommendation algorithm so that the presence of a user's data (either a single user-item rating or the user's entire history) is masked, by increasing the chance that two arbitrary records have close probabilities of generating the same noisy data. McSherry et al. (McSherry and Mironov, 2009) utilize differential privacy to construct private covariance matrices to be further used by the recommender. Another work (Jorgensen and Yu, 2014) clusters users w.r.t. their social relations and generates a differentially private average of users' preferences in each cluster. Hua et al. (Hua et al., 2015) propose a private matrix factorization which adds noise to item vectors to make them differentially private. Bassily et al. (Bassily and Smith, 2015) modify user-item rating data to satisfy differential privacy and then share it with the recommender. Another work (Zhu and Sun, 2016) makes the item list differentially private and then sends it to the recommender. Perturbation-based techniques obfuscate the user's interaction history by adding fake items and ratings to it. Rebollo et al. (Rebollo-Monedero et al., 2011) propose an information-theoretic privacy metric and then find the obfuscation rate for generating forged user profiles so that the privacy risk is minimized. Similarly, (Parra-Arnau et al., 2014) proposes to add or remove items and ratings from user profiles to minimize privacy risk. Polat et al. (Polat and Du, 2003) use a randomized perturbation technique by sharing disguised z-scores for the items a given user has rated. In another work (Luo and Chen, 2014), similar users are grouped together and the aggregated ratings of users within the same group are used to estimate a group preference vector. Similar to (Polat and Du, 2003), randomness is then added to the preference vector before it is shared with the recommender.

Attribute Inference Attacks and Defenses. A private-attribute inference attack infers users' private-attribute information from their publicly available information. These attacks can be categorized into three groups. The first group leverages a target user's friends' information (He et al., 2006; Lindamood et al., 2009; Gong et al., 2014) and community membership information (Zheleva and Getoor, 2009; Mislove et al., 2010) to infer the target's private attributes. The second group leverages users' behavioral information, such as movie-rating behavior (Weinsberg et al., 2012) and Facebook likes (Kosinski et al., 2013), to infer their private-attribute information. The third group exploits both friend and behavioral information (Gong and Liu, 2016, 2018; Jia et al., 2017). Gong et al. (Gong and Liu, 2016, 2018) construct a social-behavior-attribute network in which all users' behavioral and friendship information is integrated in a unified framework; private attributes are then inferred through a vote distribution attack model. Another work (Jia et al., 2017) incorporates structural and behavioral information from users who do not have the attribute into the training process, i.e., negative training samples.

Little work focuses on protecting users against private-attribute inference attacks (Weinsberg et al., 2012; Jia and NZhenqiang, 2018). In (Weinsberg et al., 2012), a predefined number of dummy items that are negatively correlated with a user's actual attributes is added to his profile before the anonymized user-item rating data is published. In a recent paper (Jia and NZhenqiang, 2018), a value that differs from the user's actual attribute is first sampled for the given private attribute w.r.t. a certain probability distribution; the minimum noise is then found and added to the user-item data via adapted evasion attacks, such that the malicious attacker predicts the sampled value as the user's private attribute.

Our work is different from existing works. First, existing privacy-preserving recommendation systems do not specifically target private-attribute inference attacks. Second, existing defenses against this attack (Weinsberg et al., 2012; Jia and NZhenqiang, 2018) address utility loss by minimizing the amount of changes made to the data; in the scope of recommendation systems, however, this approach can neglect the quality of the received service, i.e., produce poor recommendation results. Third, sharing anonymized data with the recommender does not preclude the malicious attacker from inferring private-attribute information when users receive new recommendations. All of these limitations raise the need for a recommendation system that guards against the inference of private attributes while maintaining user utility.

4. Recommendation with Attribute Protection (RAP)

Our proposed recommendation framework, RAP, aims to concurrently recommend interesting items to users and protect them against private-attribute leakage. The entire model is illustrated in Figure 1. The framework consists of two major components: 1) a Bayesian personalized recommender, and 2) a private-attribute inference attacker. The personalized ranking recommender aims to extract users' actual preferences and recommend relevant items to them. The private-attribute inference attacker seeks to develop a model which can deduce users' private information w.r.t. the existing recommendation system. The recommendation component then utilizes the attacker to guide the recommendation process, ensuring that the union of previously rated and newly recommended items does not leak a user's attributes and thereby fooling the adversary. Inspired by adversarial machine learning, we model this objective as a min-max game between the two components: the attacker seeks to maximize its gain, while the recommender aims to minimize both its recommendation loss and the attacker's gain. The final output of RAP for each user is a list of top-N items which are interesting yet safe for him.

4.1. Bayesian Personalized Recommendation

In this section, we propose a new Bayesian personalized recommendation model. The proposed model structure is shown in Fig. 2. The model first extracts users' and items' latent embeddings and then utilizes a learning-to-rank approach to recommend items to users.

Learning-to-rank methods have been introduced to optimize recommendation systems toward personalized ranking. Inspired by the recent success of Bayesian Personalized Ranking (BPR) (Rendle et al., 2009) in image and friend recommendation systems (Niu et al., 2018; Ding et al., 2017), we choose BPR over other approaches. The idea behind BPR is that observed user-item interactions should be ranked higher than unobserved ones. Learning from implicit feedback, BPR's goal is to maximize the margin between an observed user-item interaction and its unobserved counterparts. In particular, BPR can be interpreted as a classifier which, given a triplet of a user, an item he has interacted with, and an item he has not, determines whether the observed user-item interaction should have a higher rank score than the unobserved one.
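For reference, the standard BPR optimization criterion of Rendle et al. (2009), over triplets $(u, i, j)$ in which user $u$ interacted with item $i$ but not with item $j$, is

$\text{BPR-OPT} = \sum_{(u,i,j)} \ln \sigma\big(\hat{x}_{ui} - \hat{x}_{uj}\big) \;-\; \lambda_{\Theta}\,\lVert \Theta \rVert^{2},$

where $\hat{x}_{ui}$ is the model's preference score of user $u$ for item $i$, $\sigma$ is the sigmoid function, and $\Theta$ are the model parameters. The loss used in this paper (Eq. 4) casts the same pairwise idea as a binary classification problem, so the formula above is only a reference point.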

Figure 2. Overview of the Bayesian personalized recommendation component.

This recommendation component has three inputs: the user and the two items of a training triplet. We denote the user and item indices by a tuple of one-hot encoding vectors whose dimensions equal the number of users and the number of items, respectively. Following the input layer, each input is fully connected to a corresponding embedding layer that learns the latent representation of the user and the items; the embedding dimension is the same for users and items:

(1)

where the weight matrices of the embedding layers serve as the embedding matrices for users and items. In the next layer, the user and item embedding vectors are passed to hidden layers for further calculations. For example, the hidden layer produces the hidden representation of the user as:

(2)

where the hidden layer applies an activation function to an affine transformation defined by its weight matrix and bias vector.

Using the hidden representations of the user and the two items, the next layer produces the user's preference scores toward the two items. For example:

(3)

where the output layer applies an activation function to the concatenation of the user and item hidden representations, plus a bias parameter. Note that, to keep the model simple, all users share the same latent-representation learning parameters in the hidden layer and in the output layer.

We use BPR to learn how to rank in the recommendation problem. The final objective is to minimize the following loss function w.r.t. the recommender's parameters:

(4)

where the ground-truth value for model training is defined as:

(5)

where the training pairwise instances are drawn from the whole set of items and the set of items rated by each user, and the ground truth is derived from the actual rating the user gives to an item. The recommender's parameter set consists of the embedding matrices for users and items. The proposed model treats the recommendation problem as a binary classification problem to ensure that the pairwise preference relations hold.

After training the recommendation model, the recommender predicts a preference score for every item a given user has not rated. To calculate the preference score for an unrated item, we pass the corresponding tuple to the recommender and compute the final preference score of the user toward the item from the model's outputs. All unrated items are then sorted by their preference scores in descending order, and the top-N items are returned as the recommendation to the user.
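A minimal sketch of such a pairwise recommender in PyTorch is given below. The layer sizes, class and method names, and the exact wiring of the score and loss are our own assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BPRRecommender(nn.Module):
    """Embeddings -> hidden layers -> pairwise preference scores (BPR-style)."""
    def __init__(self, n_users, n_items, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)    # user embedding matrix
        self.item_emb = nn.Embedding(n_items, emb_dim)    # item embedding matrix
        self.hidden_u = nn.Linear(emb_dim, hidden_dim)    # hidden layer for users
        self.hidden_i = nn.Linear(emb_dim, hidden_dim)    # hidden layer for items
        # output layer scores the concatenated (user, item) hidden representations
        self.out = nn.Linear(2 * hidden_dim, 1)

    def score(self, u, i):
        hu = F.relu(self.hidden_u(self.user_emb(u)))
        hi = F.relu(self.hidden_i(self.item_emb(i)))
        return self.out(torch.cat([hu, hi], dim=-1)).squeeze(-1)

    def bpr_loss(self, u, pos, neg):
        # an observed pair (u, pos) should rank above an unobserved pair (u, neg)
        diff = self.score(u, pos) - self.score(u, neg)
        return -F.logsigmoid(diff).mean()
```

During training, triplets of a user, an item he rated, and an item he did not rate would be sampled and bpr_loss minimized; at serving time, score is evaluated for all unrated items and the top-N are returned, mirroring the procedure described above.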

4.2. Training an Attacker against Inferring Private Attribute Information

Figure 3. Overview of the private-attribute inference attacker component for one attribute.

The goal of our model is to recommend ranked items to users such that a potential adversary cannot infer users' private-attribute information, such as age, gender, and occupation. A challenge, however, is that the recommendation system does not know the malicious attacker's model. To address this challenge, we add a private-attribute inference attacker component to our model, which seeks to learn a classifier that can accurately identify users' private information from their previous interactions. We then leverage this component to regularize the recommendation process by incorporating the constraints necessary to fool the adversary and thus avoid the leakage of private attributes after recommendation. This part is discussed in detail in Section 4.3.

The goal of the private-attribute attacker is to predict a target user's private-attribute information by leveraging his latent representation as well as the latent representations of the items in his item list, which includes both the items the user has rated previously and the newly recommended items. Given the set of private attributes (e.g., age, gender), the attacker component has its own set of parameters, and its output for a user w.r.t. a given private attribute is the probability of each value of that attribute, which is compared against the attribute's actual value. The structure of the private-attribute inference attacker is shown in Fig. 3. The input to this model for each user is the latent embedding representation of each item in his item list, together with the user's own latent embedding. The item embeddings are passed to a single-layer recurrent neural network (RNN), and the output of the RNN is concatenated with the user's embedding. The last layer then produces the predicted value of the sensitive attribute for the user:

(6)

where the output layer applies its weights and bias to the concatenation of the RNN output and the user embedding; these parameters are shared among all users to keep the model simple. We then minimize the attacker component's loss function over all private attributes by seeking its optimal parameters. The objective function over all users can be formally written as follows:

(7)

where the loss for each private attribute is the cross-entropy loss.
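A corresponding sketch of the attacker component for a single private attribute, with the same caveat that dimensions and names are illustrative assumptions, could look as follows:

```python
import torch
import torch.nn as nn

class AttributeAttacker(nn.Module):
    """RNN over the item-list embeddings, concatenated with the user embedding,
    followed by a linear classifier over the values of one private attribute."""
    def __init__(self, emb_dim=64, rnn_dim=64, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(emb_dim, rnn_dim, num_layers=1, batch_first=True)
        self.out = nn.Linear(rnn_dim + emb_dim, n_classes)  # shared across users

    def forward(self, item_seq_emb, user_emb):
        # item_seq_emb: (batch, list_len, emb_dim); user_emb: (batch, emb_dim)
        _, h_last = self.rnn(item_seq_emb)            # h_last: (1, batch, rnn_dim)
        feat = torch.cat([h_last.squeeze(0), user_emb], dim=-1)
        return self.out(feat)                         # logits over attribute values

# One attacker is instantiated per private attribute; Eq. 7 sums the
# cross-entropy loss of all attackers over all users.
```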

4.3. Adversarial Learning for Recommendation with Private-Attribute Protection

Thus far, we have discussed how we 1) learn user and item representations to recommend ranked items to each user based on his personalized preferences; and 2) train an attacker which can accurately infer a target user's private-attribute information given the list of his rated items and received recommendations. We stress that the adversary always has the upper hand and adapts his private-attribute inference attack in order to minimize his inference loss w.r.t. the existing recommendation system. The final objective is thus to recommend relevant ranked items to users such that a potential adversary cannot infer their private-attribute information. To achieve the two goals together, we design an optimization problem that minimizes the recommendation loss of our model and maximizes the inference loss of a determined attacker who adaptively minimizes his own loss. Inspired by the idea of adversarial learning, we model this optimization as a min-max game between the two components, the Bayesian personalized recommender and the private-attribute attacker.

In our proposed model, the adversary tries to adapt itself to obtain the maximum gain, while the recommendation system seeks to recommend ranked items that not only align well with users' preferences but also minimize the adversary's gain. We reformulate the objective function of the recommendation system as minimizing the attacker's gain and the recommendation loss simultaneously:

(8)

The inner part learns the most determined adversary, which adaptively minimizes its loss regarding private-attribute inference given the user and item information. The outer part seeks to both minimize the recommendation loss and fool the given adversary. A trade-off parameter controls the contribution of the private-attribute inference attacker to the learning process. The objective function in Eq. 8 can be rewritten as follows:

(9)

where all parameters of both components are learned jointly, a norm regularizer is imposed on the parameters, and a scalar controls the contribution of the regularization term.
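Written with the assumed notation from the sketch in Section 2, one plausible form of this min-max objective (an illustration only, not necessarily the paper's exact Eq. 8/9) is

$\min_{\theta_R} \; \max_{\theta_A} \; \Big[ \, \mathcal{L}_{rec}(\theta_R) \;-\; \alpha \, \mathcal{L}_{attr}(\theta_A; \theta_R) \, \Big] \;+\; \beta \, \lVert \Theta \rVert_2^{2},$

where $\mathcal{L}_{rec}$ is the recommendation loss of Eq. 4, $\mathcal{L}_{attr}$ the attacker's cross-entropy loss of Eq. 7, $\theta_R$ and $\theta_A$ the recommender and attacker parameters, $\alpha$ the attacker-contribution weight, $\beta$ the regularization weight, and $\Theta$ the full parameter set. Maximizing over $\theta_A$ yields the most determined attacker (it minimizes $\mathcal{L}_{attr}$), while minimizing over $\theta_R$ lowers the recommendation loss and raises that attacker's loss.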


4.4. Optimization Algorithm

The optimization process is illustrated in Algorithm 1. First, we create a mini-batch of users sampled from the training data and serve their private-attribute and item-rating information to the model. Next, we train the Bayesian personalized recommender according to Eq. 10 w.r.t. the recommender's parameters (Line 3). Then, for each user in the mini-batch, we calculate the top-N recommended items and accordingly build his item list. The private-attribute inference attacker component is then trained on the user and item embedding information using Eq. 7 (Line 5). After training RAP, a list of top-N items is returned to each user as the recommendation.

Input:   Item set, training user data, training user-item rating matrix, batch size, and the model hyperparameters.
Output:  A trained recommendation with attribute protection model, RAP.
1:  repeat
2:     Create a mini-batch of users with their private-attribute and item-rating information from the training data
3:     Train the recommender with attribute protection via Eq. 10 w.r.t. its parameters
4:     For each user in the mini-batch, calculate the top-N recommended items
5:     Train the private-attribute inference attacker via Eq. 7 given the users' information, including their item lists
6:  until Convergence
Algorithm 1 The Learning Process of the RAP model
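A schematic of this alternating training loop, continuing the hedged PyTorch sketches from Sections 4.1 and 4.2 (the helpers sample_batch, sample_triplets, top_n_items, item_lists_emb, and attr_labels are hypothetical placeholders, and all hyperparameter values are illustrative), might be:

```python
import torch

def train_rap(rec, attackers, users, sample_batch, sample_triplets,
              top_n_items, item_lists_emb, attr_labels,
              alpha=0.1, epochs=20, lr=1e-3):
    """Alternating optimization mirroring Algorithm 1 (sketch, not the authors' code).

    rec       : BPRRecommender (Section 4.1 sketch)
    attackers : dict mapping attribute name -> AttributeAttacker (Section 4.2 sketch)
    """
    opt_rec = torch.optim.Adam(rec.parameters(), lr=lr)
    opt_att = torch.optim.Adam(
        [p for a in attackers.values() for p in a.parameters()], lr=lr)
    ce = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        batch = sample_batch(users)              # Line 2: mini-batch of users
        u, i, j = sample_triplets(batch)         # rated item i, unrated item j

        # Line 3: recommender minimizes BPR loss while maximizing attackers' loss
        opt_rec.zero_grad()
        seq_emb, user_emb = item_lists_emb(rec, batch)   # item-list + user embeddings
        adv_loss = sum(ce(attackers[a](seq_emb, user_emb), attr_labels[a][batch])
                       for a in attackers)
        (rec.bpr_loss(u, i, j) - alpha * adv_loss).backward()
        opt_rec.step()

        # Line 4: rebuild each user's item list with the new top-N recommendations
        top_n_items(rec, batch)

        # Line 5: attackers minimize their own cross-entropy loss (Eq. 7)
        opt_att.zero_grad()
        seq_emb, user_emb = item_lists_emb(rec, batch)
        seq_emb, user_emb = seq_emb.detach(), user_emb.detach()  # keep recommender fixed
        att_loss = sum(ce(attackers[a](seq_emb, user_emb), attr_labels[a][batch])
                       for a in attackers)
        att_loss.backward()
        opt_att.step()
```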

5. Experiments

In this section, we conduct experiments to evaluate the proposed framework in terms of both privacy and the quality of recommendation. We aim to answer the following questions:


  • Q1 - Privacy: How does RAP perform in preventing leakage of users’ private information?

  • Q2 - Utility: How does RAP perform in recommending relevant items to users?

  • Q3 - Utility-Privacy Relation: Does the improvement in privacy result in sacrificing the utility of recommendation system?

To answer the first question (Q1), we examine our model against private attributes with different distributions, namely age, gender, and occupation, and evaluate the effectiveness of RAP in preventing the leakage of users' private information given the union of users' previously rated and newly recommended items. Addressing the leakage of private-attribute information may deteriorate recommendation performance; therefore, to answer the second question (Q2), we examine the performance of RAP in terms of the quality of the recommendation. Finally, to answer the third question (Q3), we investigate the loss in recommendation performance when enhancing users' privacy.

5.1. Data

We use the publicly available MovieLens dataset (Harper and Konstan, 2016). This dataset includes ratings by 943 users on 1,682 movies. Each user has rated at least 20 movies, and rating scores range from 1 to 5. Each user is associated with three private attributes: gender (male/female), age, and occupation. Following the setting of (Hovy and Søgaard, 2015), we categorize the age attribute into three groups: over 45, under 35, and between 35 and 45. In total, 21 possible occupations are considered in this data. The average number of rated items per user is 129.

5.2. Experimental Setting

Here, we first explain how we design experiments to evaluate utility and privacy. Then, we discuss evaluation metrics and baselines.

Implementation Details: The parameters of the recommendation and attacker components are determined through grid search. For the Bayesian personalized ranking recommendation component, the dimension of the first (embedding) layer, and hence the size of the user and item embedding vectors, and the dimension of the hidden layer are selected this way. For the private-attribute inference attacker component, we use a single-layer RNN whose input dimension matches the user and item embeddings passed from the recommendation component; the dimension of its hidden layer is tuned likewise. The trade-off and regularization parameters are determined through cross-validation.

We initialize the weight matrices in both components with random values drawn from a uniform distribution. The error gradient is back-propagated from the output to the input, and the parameters in each layer are updated. The optimization algorithm used for the gradient updates is Adam (Kingma and Ba, 2014). The loss generally converges after 20 epochs, and a fixed batch size is used across the experiments.

Recommendation Evaluation: We evaluate recommendation performance by examining the quality of the recommended items for all users. We follow the setting of (Jia and NZhenqiang, 2018) to set up the experiments. For each user, we randomly select a fixed number of his rated items as the test set and use his remaining rated items for training, setting the ratings of the test items to zero. We vary the number of test items per user (35, 40, and 45, as reported in Table 1). The top-N items are then returned to each user as the recommendation. Note that we assume RAP has access to the users' private-attribute information during training.

Private-Attribute Evaluation: We evaluate users' privacy in terms of their robustness against malicious attribute inference attacks, in which the adversary's goal is to infer users' private attributes. In particular, the malicious attacker learns a multi-class classifier which takes a target user's item list, i.e., the union of his rated items and the items recommended to him, and infers the user's private attributes, i.e., gender, age, and occupation.

We use a Neural Network (NN) model as the adversary's classifier. Note that RAP is not aware of the adversary's model. In this attack, the adversary deploys a feed-forward network with a single hidden layer. The input to this model is a one-hot style indicator vector over the user's item list, so its dimension equals the number of items in the dataset. The input layer is fully connected to a hidden layer, followed by an output layer; the dimension of the hidden layer is determined through grid search. We note that Gong et al. (NZhenqiang and Liu, 2016) also proposed an attribute inference attack which leverages both social friends and rating behavior; however, their attack is not applicable to our problem, as we focus on leveraging only user-item rating information.
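A minimal sketch of such an adversary classifier (hedged; the hidden dimension and names are illustrative, and the input is the item-list indicator vector described above) is:

```python
import torch.nn as nn

class AdversaryMLP(nn.Module):
    """Single-hidden-layer feed-forward classifier over a user's item-list vector."""
    def __init__(self, n_items, hidden_dim=128, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_items, hidden_dim),     # input: indicator vector over all items
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),   # logits over one attribute's values
        )

    def forward(self, item_vec):
        return self.net(item_vec)
```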

We follow the setting of (Jia and NZhenqiang, 2018) to set up the experiments. We split the data into training and test sets by sampling a fraction of the users uniformly at random as the training set and using the remaining users as the test set. We assume that the users in the training set have publicly disclosed their private information, while the users in the test set keep their attribute information private. Then, for each user in the test set, we randomly select a number of rated items and remove them from the user's rating history by setting their ratings to zero, while keeping the user-item ratings of the training users intact (i.e., their original user-item ratings). The trained RAP model is deployed on the test users, and the top-N recommended items are added to each user's previously rated items to form his item list. We vary the number of test items per user (35, 40, and 45).

The adversary's classifier is trained on the training set and evaluated on the users in the test set. Note that we assume the malicious attacker knows the original, intact user-item interactions of the users in the training set and seeks to predict the private-attribute information of the users in the test set, given their item lists. We evaluate a separate malicious attack for each private attribute.

Test items = 35
Model      AUC-Gen  AUC-Age  AUC-Occ  Utility-1  Utility-2
Original   0.7662   0.7050   0.8332   0.156      0.156
LDP-SH     0.6587   0.6875   0.8076   0.071      0.071
BlurMe     0.6266   0.6177   0.7614   0.118      0.118
RAP        0.6039   0.5397   0.7319   0.152      0.152

Test items = 40
Model      AUC-Gen  AUC-Age  AUC-Occ  Utility-1  Utility-2
Original   0.7662   0.7050   0.8332   0.151      0.172
LDP-SH     0.6440   0.6777   0.7954   0.062      0.078
BlurMe     0.6013   0.5949   0.7589   0.109      0.134
RAP        0.5714   0.5270   0.7315   0.147      0.168

Test items = 45
Model      AUC-Gen  AUC-Age  AUC-Occ  Utility-1  Utility-2
Original   0.7662   0.7050   0.8332   0.145      0.187
LDP-SH     0.6398   0.6732   0.7817   0.055      0.081
BlurMe     0.5884   0.5901   0.7522   0.099      0.150
RAP        0.5278   0.5262   0.7312   0.142      0.183

Table 1. RAP performance for 35, 40, and 45 test items per user. The AUC columns report the attacker's micro-AUC for Gender, Age, and Occupation (lower means higher privacy); Utility-1 and Utility-2 denote the two recommendation metrics defined in Section 5.2 (higher means higher utility).

Evaluation Metrics: We use the following metrics to evaluate RAP's performance w.r.t. the malicious private-attribute inference attack (i.e., privacy) and product recommendation (i.e., utility):


  • Private-Attribute Evaluation: Since the distribution of data over different private-attribute values is imbalanced, we report the micro-AUC (Fawcett, 2006) of the adversary's classifier, which gives a more accurate assessment under class imbalance. A lower AUC demonstrates higher privacy in terms of obscuring private attributes.

  • Recommendation Evaluation: We use two standard metrics that are widely used in related work (Ziegler et al., 2005); a hedged code sketch of both metrics is given after this list.

    The first metric is the ratio of test items that are successfully recommended in a top-N position of the ranking list to the value of N. For each user, we measure it as:

    (10)

    The second metric is the ratio of the top-N recommended items that appear in the test set to the number of items to be recommended in the test. For each user in the data, we measure it as follows:

    (11)

    We then report the averages of both metrics over all users in the dataset and set the number of returned items to N.
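As referenced above, a small Python sketch of the two per-user metrics (the original metric names are not available here, so the names below are only descriptive labels) is:

```python
def per_user_metrics(recommended_top_n, test_items):
    """recommended_top_n: ranked list of the N recommended item ids for one user.
    test_items: this user's held-out (test) item ids."""
    test_set = set(test_items)
    hits = sum(1 for item in recommended_top_n if item in test_set)
    metric_1 = hits / len(recommended_top_n)   # hits among the top-N, relative to N
    metric_2 = hits / len(test_set)            # hits relative to the number of test items
    return metric_1, metric_2

# The reported scores are the averages of metric_1 and metric_2 over all users.
```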

Baseline Methods: We compare RAP with the following baselines:


  • Original: This baseline is a variant of RAP which recommends items for each user without incorporating the private-attribute inference attacker component, i.e., with the attacker's contribution weight set to zero.

  • LDP-SH (Bassily and Smith, 2015): This method adds noise to user-item ratings to satisfy differential privacy. It requires categorical data; in our case, each user-item rating can be viewed as categorical data taking values in {1, 2, 3, 4, 5}.

  • BlurMe (Weinsberg et al., 2012): This method perturbs user-item ratings before sending them to the recommendation system. It adds to each user's profile new items that are negatively correlated with the user's actual private attributes and assigns them the average rating score. BlurMe needs to be deployed for each attribute separately.

To have a fair comparison between RAP and the two baselines, we anonymize the user-item rating data according to each baseline and use the noisy, manipulated data to train the recommendation model. We use a matrix factorization model as the recommendation framework for both baselines. The evaluation procedure described above is then applied.

5.3. Privacy Analysis (Q1)

The results against the malicious private-attribute inference attack (Section 5.2) are reported in Table 1. We observe that increasing the number of test items results in a decrease of the AUC score for all frameworks. This is because, for each target user in the test set, recommended items have been added to the user's item list; therefore, a larger number of test items can decrease the malicious attacker's chance of correctly inferring users' private-attribute information. Moreover, RAP has a significantly lower AUC score than Original for all three private attributes and thus outperforms Original in terms of obscuring users' private-attribute information. RAP also performs significantly better at hiding private information than LDP-SH. The reason is that LDP-SH aims to achieve a privacy goal different from preventing the leakage of private information. This confirms that although adding noise and satisfying differential privacy can indirectly mitigate private-attribute leakage, it does not directly target this problem. These results show the importance of the private-attribute inference attacker component in obfuscating private information. We also observe that RAP hides more private information than BlurMe (lower AUC score). This demonstrates that providing obfuscated user-item rating data to the recommendation system does not necessarily prevent future private-attribute leakage when the user receives (and accordingly buys) more recommended products. Moreover, BlurMe needs to be deployed for each private attribute separately, while RAP considers all three private attributes together.

These results confirm the effectiveness of RAP in obscuring users' private-attribute information and demonstrate that, even though RAP is not aware of the adversary's inference model, it is robust against the malicious attacker.

Test items = 35
Model     AUC-Gen  AUC-Age  AUC-Occ  Utility-1  Utility-2
RAP       0.6039   0.5397   0.7319   0.152      0.152
RAP-Age   0.6450   0.5948   0.7528   0.150      0.150
RAP-Gen   0.5332   0.6789   0.7558   0.151      0.151
RAP-Occ   0.6571   0.6949   0.7468   0.147      0.147

Test items = 40
Model     AUC-Gen  AUC-Age  AUC-Occ  Utility-1  Utility-2
RAP       0.5714   0.5270   0.7315   0.147      0.168
RAP-Age   0.5489   0.5938   0.7522   0.146      0.167
RAP-Gen   0.5298   0.6614   0.7556   0.145      0.166
RAP-Occ   0.6485   0.6871   0.7466   0.141      0.161

Test items = 45
Model     AUC-Gen  AUC-Age  AUC-Occ  Utility-1  Utility-2
RAP       0.5278   0.5262   0.7312   0.142      0.183
RAP-Age   0.5475   0.5909   0.7497   0.141      0.182
RAP-Gen   0.5211   0.6415   0.7555   0.141      0.181
RAP-Occ   0.6454   0.6853   0.7438   0.135      0.174

Table 2. Impact of training RAP with a single private-attribute attacker component (RAP-Age, RAP-Gen, RAP-Occ) on utility and privacy; column meanings are as in Table 1.
Figure 4. Performance results for the private-attribute inference attack (panels a-c: Age, Gender, Occupation) and the recommendation task (panel d) for different values of the attacker-contribution parameter.

5.4. Utility Analysis (Q2)

The recommendation results for the different methods and different numbers of test items are also shown in Table 1. We observe that increasing the number of test items increases Utility-2 and decreases Utility-1 for all methods; in both cases, higher scores indicate higher recommendation quality. Another observation is that LDP-SH has the worst performance among all methods, i.e., the lowest utility scores. This is because of the way LDP-SH adds noise to the user data without considering the quality of the recommendation service, which can result in degraded recommendations. BlurMe also performs worse than RAP, as it neglects the quality of the recommendation results. These results confirm the effectiveness of the Bayesian personalized recommendation component, which helps RAP take utility into consideration in practice. Moreover, the recommendation quality of RAP is comparable to that of the Original approach, which means that RAP can accurately capture users' actual preferences and interests (i.e., high utility).

These results confirm the effectiveness of RAP in understanding users' actual preferences and recommending ranked, relevant products that are interesting yet safe for users.

5.5. Utility-Privacy Relation (Q3)

We compare the privacy and utility results in Table 1 for all methods. We observe that LDP-SH has the worst results in terms of both preserving privacy and recommendation performance. Another observation is that BlurMe improves privacy compared to the Original method, but loses utility in terms of recommendation performance. This is in contrast with RAP, which outperforms BlurMe and LDP-SH in terms of recommendation and is comparable with Original. RAP also achieves the lowest AUC score, and therefore the highest privacy, among all methods. Comparing RAP with the other methods confirms that approaching utility loss by minimizing the amount of data changes results in a loss of recommendation quality in practice, reflected as degraded recommendation results for the baseline approaches. Moreover, these results confirm the effectiveness of the Bayesian personalized recommendation component in RAP, which lets us account for recommendation quality in practice. The results also demonstrate the complementary roles of the recommendation and private-attribute components, which guide each other through both privacy and utility concerns. The outcome is a privacy-aware recommendation system that is prepared for private-attribute inference attacks and understands users' preferences.

5.6. Impact of Different Components

Here, we investigate the impact of the different private-attribute attacker components on obscuring users' private information. We define three variants of our proposed framework, RAP-Age, RAP-Gen, and RAP-Occ. Each variant is trained with only the corresponding private-attribute inference attacker component, e.g., RAP-Age is trained solely with the age inference attacker and does not utilize any other private-attribute attackers during training. Results are shown in Table 2. We observe that RAP-Gen has the best performance in obscuring the gender attribute compared to the other approaches (i.e., the lowest AUC score), although its recommendation quality is lower than that of the full RAP model. For the other private attributes, RAP still outperforms RAP-Age and RAP-Occ in terms of obscuring the age and occupation attributes. Moreover, the results show that using only one private-attribute attacker compromises the model's effectiveness in obfuscating the other private attributes. For the recommendation task, we surprisingly observe that using only one of the private-attribute attackers during training reduces performance compared to RAP on both utility metrics. This means that focusing merely on obscuring one private attribute can result in more recommendation performance degradation.

5.7. Probing Further

RAP has one important parameter that controls the contribution of the private-attribute attacker component. In this section, we probe further into the effect of this parameter by varying its value. For this experiment, we fix the number of test items and set the number of top-N returned items to the same value when calculating the utility metrics; note that the two utility metrics coincide in this scenario, since N matches the number of test items. Results are shown in Fig. 4.

Although this parameter controls the contribution of the private-attribute inference attacker component, we surprisingly observe that as its value increases, the AUC score of the attribute inference attack first decreases, up to a point, and then increases. This means that private information is obscured more effectively at the beginning, as the parameter increases, and less effectively later. Moreover, as the parameter increases, the performance of the recommendation task decreases, showing that increasing the contribution of the private-attribute attacker component reduces the quality of the recommendation framework. Another observation is that setting the parameter to a non-zero value improves the hiding of private-attribute information compared to Original (i.e., the parameter set to zero), which shows the importance of RAP's private-attribute attacker component in preserving users' privacy. Finally, beyond the turning point, continuously increasing the parameter increases the AUC of the malicious private-attribute inference attack, i.e., degrades the hiding of private information. The reason is that the model can overfit as the parameter grows, leading to an inaccurate estimate of privacy protection.

6. Conclusion

In this paper, we propose an adversarial learning-based recommendation with attribute protection model, RAP, which guards users against private-attribute inference attacks while maintaining utility. RAP recommends interesting yet safe products to users such that a malicious attacker cannot infer their private attributes from their interaction histories and recommendations. RAP has two main components: a Bayesian personalized recommender and a private-attribute inference attacker. Our empirical results show the effectiveness of RAP in both protecting users against private-attribute inference attacks and preserving the quality of recommendation results. RAP also consistently achieves better performance than the state-of-the-art related work. One extension of this work is to study the possibility of extending differential privacy mechanisms to this type of attack in recommender systems. It would also be interesting to investigate a personalized utility-privacy trade-off by tweaking the framework parameters to fit the specific needs of individuals.

Acknowledgements.
This material is based upon the work supported, in part, by NSF #1614576, ARO W911NF-15-1-0328 and ONR N00014-17-1-2605.

References

  • Alvari et al. (2019a) Hamidreza Alvari, Soumajyoti Sarkar, and Paulo Shakarian. 2019a. Detection of Violent Extremists in Social Media. In 2019 2nd International Conference on Data Intelligence and Security (ICDIS). 43–47.
  • Alvari et al. (2019b) Hamidreza Alvari, Elham Shaabani, Soumajyoti Sarkar, Ghazaleh Beigi, and Paulo Shakarian. 2019b. Less is More: Semi-Supervised Causal Inference for Detecting Pathogenic Users in Social Media. In Companion Proceedings of The 2019 World Wide Web Conference. ACM, 154–161.
  • Alvari et al. (2018) Hamidreza Alvari, Elham Shaabani, and Paulo Shakarian. 2018. Early identification of pathogenic social media accounts. In 2018 IEEE International Conference on Intelligence and Security Informatics (ISI). IEEE, 169–174.
  • Alvari and Shakarian (2019) Hamidreza Alvari and Paulo Shakarian. 2019. Hawkes Process for Understanding the Influence of Pathogenic Social Media Accounts. In 2019 2nd International Conference on Data Intelligence and Security (ICDIS). 36–42.
  • Bassily and Smith (2015) Raef Bassily and Adam Smith. 2015. Local, private, efficient protocols for succinct histograms. In Proceedings of the forty-seventh annual ACM symposium on Theory of computing. ACM, 127–135.
  • Beigi et al. (2019a) Ghazaleh Beigi, Ruocheng Guo, Alexander Nou, Yanchao Zhang, and Huan Liu. 2019a. Protecting user privacy: An approach for untraceable web browsing history and unambiguous user profiles. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. ACM, 213–221.
  • Beigi and Liu (2018a) Ghazaleh Beigi and Huan Liu. 2018a. Privacy in social media: Identification, mitigation and applications. arXiv preprint arXiv:1808.02191 (2018).
  • Beigi and Liu (2018b) Ghazaleh Beigi and Huan Liu. 2018b. Similar but different: Exploiting users’ congruity for recommendation systems. In International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation. Springer, 129–140.
  • Beigi and Liu (2019) Ghazaleh Beigi and Huan Liu. 2019. Identifying novel privacy issues of online users on social media platforms by Ghazaleh Beigi and Huan Liu with Martin Vesely as coordinator. ACM SIGWEB Newsletter Winter (2019), 4.
  • Beigi et al. (2019b) Ghazaleh Beigi, Kai Shu, Ruocheng Guo, Suhang Wang, and Huan Liu. 2019b. I Am Not What I Write: Privacy Preserving Text Representation Learning. arXiv preprint arXiv:1907.03189 (2019).
  • Beigi et al. (2019c) Ghazaleh Beigi, Kai Shu, Ruocheng Guo, Suhang Wang, and Huan Liu. 2019c. Privacy Preserving Text Representation Learning. In Proceedings of the 30th ACM Conference on Hypertext and Social Media. ACM, 275–276.
  • Calandrino et al. (2011) Joseph A Calandrino, Ann Kilzer, Arvind Narayanan, Edward W Felten, and Vitaly Shmatikov. 2011. "You Might Also Like:" Privacy Risks of Collaborative Filtering. In Security and Privacy (SP), 2011 IEEE Symposium on. IEEE, 231–246.
  • Ding et al. (2017) Daizong Ding, Mi Zhang, Shao-Yuan Li, Jie Tang, Xiaotie Chen, and Zhi-Hua Zhou. 2017. BayDNN: Friend Recommendation with Bayesian Personalized Ranking Deep Neural Network. In Proceedings of the ACM CIKM.
  • Dwork (2008) Cynthia Dwork. 2008. Differential privacy: A survey of results. In International Conference on Theory and Applications of Models of Computation. Springer, 1–19.
  • Fawcett (2006) Tom Fawcett. 2006. An introduction to ROC analysis. Pattern recognition letters 27, 8 (2006), 861–874.
  • Gong and Liu (2016) Neil Zhenqiang Gong and Bin Liu. 2016. You Are Who You Know and How You Behave: Attribute Inference Attacks via Users’ Social Friends and Behaviors.. In USENIX Security Symposium. 979–995.
  • Gong and Liu (2018) Neil Zhenqiang Gong and Bin Liu. 2018. Attribute Inference Attacks in Online Social Networks. ACM Transactions on Privacy and Security (TOPS) 21, 1 (2018).
  • Gong et al. (2014) Neil Zhenqiang Gong, Ameet Talwalkar, Lester Mackey, Ling Huang, Eui Chul Richard Shin, Emil Stefanov, Elaine Runting Shi, and Dawn Song. 2014. Joint link prediction and attribute inference using a social-attribute network. ACM Transactions on Intelligent Systems and Technology (TIST) 5, 2 (2014).
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems. 2672–2680.
  • Harper and Konstan (2016) F Maxwell Harper and Joseph A Konstan. 2016. The movielens datasets: History and context. Acm transactions on interactive intelligent systems (tiis) 5, 4 (2016).
  • He et al. (2006) Jianming He, Wesley W Chu, and Zhenyu Victor Liu. 2006. Inferring privacy information from social networks. In International Conference on Intelligence and Security Informatics. Springer, 154–165.
  • Hovy and Søgaard (2015) Dirk Hovy and Anders Søgaard. 2015. Tagging performance correlates with author age. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, Vol. 2. 483–488.
  • Hua et al. (2015) Jingyu Hua, Chang Xia, and Sheng Zhong. 2015. Differentially Private Matrix Factorization.. In IJCAI. 1763–1770.
  • Jia and NZhenqiang (2018) Jinyuan Jia and Neil Zhenqiang Gong. 2018. AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning. In 27th USENIX Security Symposium (USENIX Security 18). USENIX Association.
  • Jia et al. (2017) Jinyuan Jia, Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. 2017. AttriInfer: Inferring user attributes in online social networks using markov random fields. In Proceedings of the WWW. 1561–1569.
  • Jorgensen and Yu (2014) Zach Jorgensen and Ting Yu. 2014. A Privacy-Preserving Framework for Personalized, Social Recommendations. EDBT 582.
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
  • Konstan and Riedl (2012) Joseph A Konstan and John Riedl. 2012. Recommender systems: from algorithms to user experience. User modeling and user-adapted interaction 22, 1-2 (2012).
  • Koren (2009) Yehuda Koren. 2009. Collaborative filtering with temporal dynamics. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 447–456.
  • Kosinski et al. (2013) Michal Kosinski, David Stillwell, and Thore Graepel. 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences 110, 15 (2013), 5802–5805.
  • Lindamood et al. (2009) Jack Lindamood, Raymond Heatherly, Murat Kantarcioglu, and Bhavani Thuraisingham. 2009. Inferring private information using social network data. In Proceedings of WWW. ACM, 1145–1146.
  • Luo and Chen (2014) Zhifeng Luo and Zhanli Chen. 2014. A privacy preserving group recommender based on cooperative perturbation. In International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery. IEEE.
  • McSherry and Mironov (2009) Frank McSherry and Ilya Mironov. 2009. Differentially private recommender systems: building privacy into the net. In Proceedings of SIGKDD. ACM.
  • Mislove et al. (2010) Alan Mislove, Bimal Viswanath, Krishna P Gummadi, and Peter Druschel. 2010. You are who you know: inferring user profiles in online social networks. In Proceedings of WSDM. ACM, 251–260.
  • Niu et al. (2018) Wei Niu, James Caverlee, and Haokai Lu. 2018. Neural Personalized Ranking for Image Recommendation. In Proceedings of the 11th ACM WSDM.
  • NZhenqiang and Liu (2016) Neil Zhenqiang Gong and Bin Liu. 2016. You Are Who You Know and How You Behave: Attribute Inference Attacks via Users' Social Friends and Behaviors. In 25th USENIX Security Symposium (USENIX Security 16). USENIX Association.
  • Parra-Arnau et al. (2014) Javier Parra-Arnau, David Rebollo-Monedero, and Jordi Forné. 2014. Optimal forgery and suppression of ratings for privacy enhancement in recommendation systems. Entropy 16, 3 (2014), 1586–1631.
  • Polat and Du (2003) Huseyin Polat and Wenliang Du. 2003. Privacy-preserving collaborative filtering using randomized perturbation techniques. In International Conference on Data Mining. IEEE.
  • Ramakrishnan et al. (2001) Naren Ramakrishnan, Benjamin J Keller, Batul J Mirza, Ananth Y Grama, and George Karypis. 2001. Privacy risks in recommender systems. IEEE Internet Computing 6 (2001), 54–62.
  • Rashid et al. (2002) Al Mamunur Rashid, Istvan Albert, Dan Cosley, Shyong K Lam, Sean M McNee, Joseph A Konstan, and John Riedl. 2002. Getting to know you: learning new user preferences in recommender systems. In Proceedings of the 7th international conference on Intelligent user interfaces. ACM, 127–134.
  • Rebollo-Monedero et al. (2011) David Rebollo-Monedero, Javier Parra-Arnau, and Jordi Forné. 2011. An information-theoretic privacy criterion for query forgery in information retrieval. In International Conference on Security Technology. Springer, 146–154.
  • Rendle et al. (2009) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence. AUAI Press.
  • Weinsberg et al. (2012) Udi Weinsberg, Smriti Bhagat, Stratis Ioannidis, and Nina Taft. 2012. BlurMe: Inferring and obfuscating user gender based on ratings. In Proceedings of the sixth ACM conference on Recommender systems. ACM, 195–202.
  • Zheleva and Getoor (2009) Elena Zheleva and Lise Getoor. 2009. To join or not to join: the illusion of privacy in social networks with mixed public and private user profiles. In Proceedings of the 18th international conference on World wide web. ACM, 531–540.
  • Zhu and Sun (2016) Xue Zhu and Yuqing Sun. 2016. Differential privacy for collaborative filtering recommender algorithm. In Proceedings of the 2016 ACM on International Workshop on Security And Privacy Analytics. ACM, 9–16.
  • Ziegler et al. (2005) Cai-Nicolas Ziegler, Sean M McNee, Joseph A Konstan, and Georg Lausen. 2005. Improving recommendation lists through topic diversification. In Proceedings of the 14th international conference on World Wide Web. ACM, 22–32.