Recommender systems play a significant role in various applications, such as e-commerce and movie recommendation. Matrix Factorization (MF) [koren2009matrix], as a typical Collaborative Filtering (CF) method, has positioned itself as one of the most effective means of generating recommendations and is widely adopted in real-world applications. Traditionally, a company must accumulate sufficient personal rating data to build a performant MF model. However, due to the sparse nature of user-item interactions, it can be hard for a single company to collect enough data. Moreover, recently enacted stringent laws and regulations such as the General Data Protection Regulation (GDPR) [Albrecht2016HowTG] and the California Consumer Privacy Act (CCPA) [ghosh2018you] stipulate rules on data sharing among companies and organizations, making it illegal and impractical for companies to collaborate by directly sharing personal rating data.
To tackle the challenge of protecting individual privacy while mitigating the data shortage issue, federated learning (FL) [konevcny2016federated, mcmahan2016communication] provides a promising way for different parties to collaboratively build a machine learning model without exposing the private data of each party. It addresses data silos and privacy problems together. In FL, data can be partitioned horizontally (example-partitioned) or vertically (feature-partitioned) across parties. When records are not aligned between parties and the feature spaces are heterogeneous, federated transfer learning can be adopted. The use of FL in recommender systems has been studied over different data distributions. For example, [chai2019secure] considers rating data horizontally partitioned among clients that hold ratings of the same user-item interaction matrix. Federated multi-view MF is studied in [flanagan2020federated], where participants hold item interaction data, item features, or user features; each participant holds a part of the model parameters, while some common parameters are shared among participants. Existing studies generally categorize horizontal and vertical federated recommender systems according to whether user alignment is required before FL [yang2019flbook, chai2019secure]. For example, participants sharing different users but the same set of items implies a horizontal federated recommender system. In this paper, we instead categorize the settings based on the partition of the feature space, as shown in Fig. 2, which is consistent with other FL systems.
Most existing studies of federated recommender systems adopt privacy-preserving techniques, including homomorphic encryption (HE) [Paillier99public-keycryptosystems], secure multiparty computation (MPC) [DBLP:journals/iacr/MohasselZ17], and differential privacy (DP) [Dwork:2008:DPS:1791834.1791836], to protect data privacy. However, there is little exploration of how data privacy can actually be breached in federated MF. It has been shown that the gradients present in the global model can potentially breach data privacy in horizontal FL for deep learning [Melis_2019, li2019quantification]. A comprehensive study of the privacy threats against plaintext federated matrix factorization under different data partitions is still missing.
Motivated by this research gap, we investigate the potential privacy risks in federated matrix factorization. Specifically, we classify federated MF into horizontal federated MF, vertical federated MF, and federated transfer MF based on the data partition approach. We then demonstrate how private user preference data can be breached by honest-but-curious participants during FL training. Finally, we discuss how cryptographic techniques can be adopted to protect privacy. The contributions of this paper can be summarised as follows:
We identify and formulate three types of federated MF problems based on the way the data is partitioned.
We demonstrate the privacy threats in each type of federated MF, by showing how the private preference data can be leaked to honest-but-curious participants.
We investigate privacy-preserving approaches to protect privacy in federated MF.
The remainder of this paper is organized as follows. Section 2 reviews related work on privacy issues of MF and federated MF. Section 3 gives the background on MF and our security model. Section 4 presents privacy attacks on each type of federated MF. Section 5 discusses privacy-preserving approaches. Finally, conclusions are drawn in Section 6.
2 Related Work
Privacy risks in general recommender systems are studied in [Jeckmans2013PrivacyIR, Lam2006DoYT], which analyze the privacy concerns that arise in different phases and are caused by different entities. However, these works do not consider the federated learning framework, which introduces parameter exchanges between participants and thus enlarges the attack surface. Though [chai2019secure] investigates privacy risks in horizontal federated MF and adopts HE to mitigate them, it assumes honest-but-curious participants, and the proposed approach only defends against an honest-but-curious server. We demonstrate that other curious clients can easily infer private training data.
Many works explore privacy-preserving techniques for federated recommender systems. [qi2020fedrec] adopts local differential privacy to train a neural network for horizontal federated news recommendation. [flanagan2020federated] studies federated multi-view MF over heterogeneous data held by different participants. [chen2020practical] explores a two-tiered notion of privacy by introducing a set of public users. [canny2002collaborative] proposes federated CF based on partial Singular Value Decomposition and adopts HE for model aggregation. Fully HE [kim2016efficient] as well as garbled circuits [nikolaenko2013privacy] have also been investigated for privacy-preserving MF, where secure MF is conducted between the server and a crypto-service provider. [gao2019hhhfl, ju2020federated] are the first to investigate the feasibility of the FL framework for distributed training of deep models from multiple heterogeneous brain-computer interface datasets. Although these works propose various privacy-preserving approaches, they do not investigate the potential privacy leakage caused by the intermediate parameters transferred during FL training.
3 Background
In this section, we introduce the matrix factorization method based on stochastic gradient descent, as well as the security model considered in this paper.
3.1 Matrix Factorization
We consider a setting where users rate a subset of items. For the set of users $[n] \triangleq \{1, \dots, n\}$ and the set of items $[m]$, the user/item pairs that generate ratings are denoted by $\mathcal{M} \subseteq [n] \times [m]$. The total number of ratings is $M = |\mathcal{M}|$. Finally, for $(i,j) \in \mathcal{M}$, we denote by $r_{ij}$ the rating generated by user $i$ for item $j$. Matrix factorization uses a $d$-dimensional vector to represent a user as $u_i \in \mathbb{R}^d$ and an item as $v_j \in \mathbb{R}^d$, referred to as a profile, and models the relevance of an item to a user as the inner product of their profiles. MF computes the user profiles and item profiles by minimizing the regularized mean squared error as follows:
\begin{equation}
\min_{U, V} \; F(U, V) = \sum_{(i,j) \in \mathcal{M}} \big(r_{ij} - \langle u_i, v_j \rangle\big)^2 + \lambda \sum_{i \in [n]} \|u_i\|_2^2 + \mu \sum_{j \in [m]} \|v_j\|_2^2
\end{equation}
for positive constants $\lambda, \mu$. The inner product $\langle u_i, v_j \rangle$ is the predicted value of the unobserved rating $\hat{r}_{ij}$.
Stochastic gradient descent (SGD) is widely applied to optimize the profiles $u_i$ and $v_j$ as follows:
\begin{equation}
u_i^{(t)} = u_i^{(t-1)} - \eta \, \nabla_{u_i} F\big(U^{(t-1)}, V^{(t-1)}\big), \qquad
v_j^{(t)} = v_j^{(t-1)} - \eta \, \nabla_{v_j} F\big(U^{(t-1)}, V^{(t-1)}\big)
\end{equation}
where $\eta$ is the learning rate, and $U$ and $V$ are the user profile matrix and item profile matrix with each row as a profile. The gradients $\nabla_{u_i} F$ and $\nabla_{v_j} F$ can be computed as follows:
\begin{equation}
\nabla_{u_i} F(U, V) = -2 \sum_{j : (i,j) \in \mathcal{M}} \big(r_{ij} - \langle u_i, v_j \rangle\big) v_j + 2\lambda u_i, \qquad
\nabla_{v_j} F(U, V) = -2 \sum_{i : (i,j) \in \mathcal{M}} \big(r_{ij} - \langle u_i, v_j \rangle\big) u_i + 2\mu v_j
\end{equation}
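As a concrete illustration, the objective and SGD updates above can be sketched in a few lines of NumPy. This is a minimal toy implementation with hyperparameters and data of our own choosing, not code from any particular system:

```python
import numpy as np

def mf_sgd(ratings, n_users, n_items, d=4, lr=0.02, lam=0.01, epochs=800, seed=0):
    """Factorize sparse ratings {(i, j): r_ij} into user/item profiles via SGD."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, d))  # user profiles u_i (rows)
    V = rng.normal(scale=0.1, size=(n_items, d))  # item profiles v_j (rows)
    for _ in range(epochs):
        for (i, j), r in ratings.items():
            e = r - U[i] @ V[j]                           # error r_ij - <u_i, v_j>
            U[i] += lr * (2 * e * V[j] - 2 * lam * U[i])  # step along -grad_{u_i} F
            V[j] += lr * (2 * e * U[i] - 2 * lam * V[j])  # step along -grad_{v_j} F
    return U, V

def mse(ratings, U, V):
    """Mean squared error over the observed ratings."""
    return float(np.mean([(r - U[i] @ V[j]) ** 2 for (i, j), r in ratings.items()]))

# Toy example: 3 users, 3 items, 5 observed ratings
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (2, 1): 1.0, (2, 2): 5.0}
U, V = mf_sgd(ratings, n_users=3, n_items=3)
```

After training, `U[i] @ V[j]` also yields predictions $\hat{r}_{ij}$ for unobserved pairs, which is how MF generates recommendations.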
3.2 Security Model
We assume that all participants, as well as the server if there is any, are honest-but-curious (a.k.a. semi-honest). An honest-but-curious participant follows the protocol honestly but tries to infer private information from the intermediate information it observes.
4 Federated MF and Privacy Threats
In this section, we discuss federated MF in three settings that differ in data partition, and we investigate how privacy can be breached by adversarial participants in each setting. For simplicity and without loss of generality, we consider FL systems consisting of two participants, $A$ and $B$. Fig. 1 compares the data partition in each setting of federated MF discussed in this paper. We adopt a sparse representation of the partitioned data in Fig. 2 to demonstrate the nature of horizontal, vertical, and transfer federated learning. In the horizontal FL setting (Fig. 2(a)), participants share the same feature space. In the vertical FL setting (Fig. 2(c)), participants hold heterogeneous feature spaces, and only $A$ holds the ratings. In federated transfer MF (Fig. 2(b)), participants share partial models (e.g., item profiles) for knowledge transfer.
4.1 Horizontal Federated MF
In horizontal federated MF, $A$ and $B$ share the same user-item interaction matrix (i.e., the same user and item feature space), as shown in Fig. 1(a) and Fig. 2(a). Therefore, each participant holds the profiles of all users and items, and can locally compute the gradient of the whole MF model. Only model aggregation requires communication between $A$ and $B$ [mcmahan2016communication]. In model aggregation, the global user profile matrix is computed by $U^{(t)} = \frac{1}{2}(U_A^{(t)} + U_B^{(t)})$, and the global item profile matrix by $V^{(t)} = \frac{1}{2}(V_A^{(t)} + V_B^{(t)})$. $A$ can compute the gradient of $B$ following
\begin{equation}
\nabla_{u_i} F_B = \frac{u_i^{(t-1)} - u_{B,i}^{(t)}}{\eta}, \qquad
\nabla_{v_j} F_B = \frac{v_j^{(t-1)} - v_{B,j}^{(t)}}{\eta}
\end{equation}
where $t$ is the index of the round. The user-item interaction matrix is sparse, that is, the number of ratings $M$ is much smaller than the number of potential ratings $nm$ [nikolaenko2013privacy]. For one update in SGD, it is therefore very likely that each user or item has at most one rating record. In that case, according to Equation 4, $A$ can easily find the pair $(i, j)$ by comparing $\nabla_{u_i} F_B$ to each item profile $v_j$, as well as comparing $\nabla_{v_j} F_B$ to each user profile $u_i$: a gradient generated by a single rating is collinear with the corresponding profile. Then, $A$ further infers the private rating score by
\begin{equation}
r_{ij} = \langle u_i, v_j \rangle - \frac{\langle \nabla_{u_i} F_B - 2\lambda u_i, \; v_j \rangle}{2 \|v_j\|_2^2}
\end{equation}
In this way, $A$ completes the inference attack and extracts the raw private user preference data of $B$ from the plaintext global model in horizontal federated MF.
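The inference steps above can be sketched numerically. The snippet below simulates the gradient of a user profile produced by a single private rating and shows how a curious participant recovers both the rated item and the rating; all values are synthetic and regularization is omitted for clarity:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_items = 8, 20
V = rng.normal(size=(n_items, d))   # global item profiles, known to A
u = rng.normal(size=d)              # the victim user's global profile, known to A
j_secret, r_secret = 7, 5.0         # B's private rating: item 7 rated with score 5

# Gradient of the user profile contributed by B's single rating (no regularization)
grad_u = -2 * (r_secret - u @ V[j_secret]) * V[j_secret]

# Step 1: identify the rated item -- grad_u is collinear with exactly one v_j
cos = np.abs(V @ grad_u) / (np.linalg.norm(V, axis=1) * np.linalg.norm(grad_u))
j_hat = int(np.argmax(cos))

# Step 2: recover the error term e = r - <u, v_j> from the gradient,
# then reconstruct the rating itself
e_hat = -(grad_u @ V[j_hat]) / (2 * V[j_hat] @ V[j_hat])
r_hat = u @ V[j_hat] + e_hat
```

With regularization, the attacker would first subtract $2\lambda u_i$ from the observed gradient; the recovery is otherwise identical.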
4.2 Vertical Federated MF
In vertical federated MF, as shown in Fig. 1(b) and Fig. 2(c), $A$ holds the user-item interaction matrix, and $B$ holds some auxiliary data of the users (or items). We adopt an existing attribute-aware MF model to leverage the auxiliary data provided by $B$ in vertical federated MF. For each user $i$, $B$ holds a distinct factor vector $z_a \in \mathbb{R}^d$ corresponding to each attribute $a$. The user can thus be described through the set of user-associated attributes $A(i)$ as $u_i + \phi_i$. For the vertical federated MF model, Equation 1 can be modified as follows:
\begin{equation}
\min \; \sum_{(i,j) \in \mathcal{M}} \Big(r_{ij} - \big\langle u_i + \phi_i, \; v_j \big\rangle\Big)^2 + \lambda \sum_{i} \|u_i\|_2^2 + \mu \sum_{j} \|v_j\|_2^2
\end{equation}
where $\phi_i = |N(i)|^{-\frac{1}{2}} \sum_{k \in N(i)} y_k + \sum_{a \in A(i)} z_a$ is the auxiliary information of user $i$, $N(i)$ is the implicitly preferred item set, and $A(i)$ is the set of $i$'s attributes (e.g., demographic information).
To conduct vertical federated MF, $B$ locally computes $\phi_i$ and sends it to $A$, while $A$ sends nothing to $B$. Therefore, $A$ leaks no private information to $B$, while $B$ leaks $\phi_i$ to $A$. In such a setting, user ID leakage during the user alignment stage is the major privacy threat.
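A minimal sketch of this one-way exchange (with synthetic data, writing $\phi_i$ for the aggregated attribute representation and keeping only the attribute term for brevity) shows that only the aggregate $\phi_i$ crosses the party boundary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_attrs = 4, 3

# Party B: one factor vector z_a per user attribute (synthetic values)
Z = rng.normal(scale=0.1, size=(n_attrs, d))
attrs_of_user = [0, 2]          # the user's attribute set A(i), known only to B

# B locally aggregates its attribute factors and sends only phi_i to A
phi_i = Z[attrs_of_user].sum(axis=0)

# Party A: holds the ratings and the user/item profiles
u_i = rng.normal(scale=0.1, size=d)
v_j = rng.normal(scale=0.1, size=d)

# A's attribute-aware prediction: r_hat = <u_i + phi_i, v_j>
r_hat = (u_i + phi_i) @ v_j
```

Note that $A$ never observes the individual factors $z_a$ or the attribute set $A(i)$; the aggregate $\phi_i$ itself, however, is revealed to $A$.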
4.3 Federated Transfer MF
Without loss of generality, in federated transfer MF, we assume that $A$ and $B$ hold ratings given by different users on the same set of items, and that $A$ tries to infer the private data of $B$. That is, $A$ and $B$ both hold the item profile matrix $V$, while $A$ holds $U_A$ and $B$ holds $U_B$ for their own users. To train the model, each participant locally conducts SGD to update its local model. For model aggregation, only $V$ is aggregated by the participants. $A$ can learn the gradient of $B$ as follows:
\begin{equation}
\nabla_{v_j} F_B = \frac{v_j^{(t-1)} - v_{B,j}^{(t)}}{\eta}
\end{equation}
As the interaction matrix is sparse, for each round it is reasonable to assume that $\nabla_{v_j} F_B$ is generated by a single user $i$ of $B$, i.e., $\nabla_{v_j} F_B = -2 (r_{ij} - \langle u_i, v_j \rangle) u_i + 2\mu v_j$. By collecting the gradients of several rounds and assuming that $u_i$ does not change, which is reasonable when the model is nearly converged, $A$ can use iterative methods such as Newton's method to approximate the numeric value of $u_i$, as shown in [chai2019secure]. After computing $u_i$, the reconstructed rating score of $B$ can easily be computed as follows:
\begin{equation}
r_{ij} = \langle u_i, v_j \rangle - \frac{\langle \nabla_{v_j} F_B - 2\mu v_j, \; u_i \rangle}{2 \|u_i\|_2^2}
\end{equation}
Thus, $A$ completes the inference attack and extracts the user profiles and the corresponding ratings of $B$ from the plaintext shared model in federated transfer MF. However, as the users are not aligned between participants, the user ID information cannot be inferred by $A$.
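To make the recovery concrete, consider the simplified scalar case ($d = 1$, regularization omitted, and the sign of $u$ assumed positive). The item gradient $-2(r - uv)u$ is affine in $v$, so observing it over two rounds already determines both $u$ and $r$ in closed form; in higher dimensions the same idea requires an iterative solver such as Newton's method, as in [chai2019secure]. All numbers below are synthetic:

```python
import numpy as np

u_true, r_true = 1.3, 4.0   # B's private user profile and rating (scalar case)

def item_gradient(v):
    # Gradient of the shared item profile contributed by B (regularization omitted)
    return -2 * (r_true - u_true * v) * u_true

# A observes the shared item profile value and B's gradient over two rounds
v1, v2 = 0.5, 0.8
g1, g2 = item_gradient(v1), item_gradient(v2)

# g(v) = -2*r*u + 2*u^2 * v is affine in v: recover slope and intercept
slope = (g2 - g1) / (v2 - v1)   # equals 2*u^2
intercept = g1 - slope * v1     # equals -2*r*u

u_hat = np.sqrt(slope / 2)      # sign ambiguity: we assume u > 0
r_hat = -intercept / (2 * u_hat)
```

The sign ambiguity in $u$ does not protect the rating: flipping the sign of `u_hat` flips `r_hat` consistently, and side information usually resolves it (e.g., ratings are non-negative).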
It is worth noting that some works denote participants sharing the same set of items and different sets of users as horizontal federated recommender systems, and participants sharing the same set of users and different sets of items as vertical federated recommender systems, based on whether the users must be aligned before FL training. Both settings demonstrate the nature of federated transfer learning: the global model is not fully shared among participants, yet the feature space is not fully partitioned without intersection. Privacy attacks on both settings are identical during FL training. Therefore, we denote both settings as federated transfer MF in this paper.
| Problem setting | Parameter partition in each party | Gradient computation | Resilience against inference attack |
|---|---|---|---|
| Horizontal FedMF | The whole model params | Locally | Weak |
| Vertical FedMF | Partial params without shared params | Collaboratively | Party A strong, Party B weak |
| Federated Transfer MF | Partial params with shared params | Locally | Medium |
Table 1 demonstrates the comparison of the three settings based on the way the model is updated, the partition of model parameters, and the resilience of the FL system against inference attacks. In horizontal federated MF, all participants share ratings from the same set of users and items; therefore, each participant locally holds the whole user profile matrix and item profile matrix for local SGD. In federated transfer MF, participants only share the same set of users (or items); each participant locally holds its own user (or item) profile sub-matrix and the global item (or user) profile matrix for local SGD. In vertical federated MF, one party holds the rating data and the other holds auxiliary data; each party holds only its own partial parameters. For both horizontal federated MF and federated transfer MF, clients can locally conduct SGD optimization without communication, and participants only need to exchange parameters during model aggregation. For vertical federated MF, the two participants need to collaboratively compute the estimated rating for each update, which dramatically increases the communication cost. The resilience of each setting against the inference attack is also shown. Horizontal federated MF breaches the most private information, including user IDs and user preference data. In vertical federated MF, the recommender $A$ leaks no information to the data provider $B$, while $B$ sends the intermediate data $\phi_i$ to the recommender; user IDs are breached for both participants during user alignment. In federated transfer MF, only the private rating data and user profiles are leaked, and no user ID is breached.
5 Privacy Preservation in Federated MF
According to the privacy threats investigated in Section 4, we give some advice for privacy preservation in federated MF. For horizontal federated MF, the global user and item profile matrices computed by aggregation should be protected from each participant. For vertical federated MF, the auxiliary data provider should keep the computed feature $\phi_i$ it sends to the recommender secret. For federated transfer MF, the shared user or item profile matrix should be kept secret from any honest-but-curious participant throughout FL training, as the rating scores and private profiles can potentially be inferred from it.
To keep intermediate parameters private, there are mainly three types of approaches: cryptography-based, obfuscation-based, and hardware-based. Cryptography-based approaches generally use HE and MPC to keep intermediate transmissions private. Obfuscation-based approaches such as DP obfuscate private data by randomization, generalization, or suppression. Hardware-based approaches rely on a trusted execution environment (TEE) to conduct FL training in a trusted enclave. Among cryptography-based approaches, fully HE can be adopted so that no decryption is required during training [kim2016efficient]. Secret sharing schemes can also be introduced following a two-server architecture [cryptoeprint:2011:535]. Since the user-item interaction matrix is sparse, applying DP may introduce too much noise and render the model unusable. TEEs can also be applied by encrypting private data and conducting private training inside the TEE [CHEN202069].
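As an illustration of the MPC-style option, the following sketch shows pairwise additive masking, the core idea behind secure aggregation: each party perturbs its update with a shared random mask before uploading, the individual messages reveal nothing about the raw updates, and the masks cancel in the sum. This is a two-party toy under our own assumptions, not a full secure-aggregation protocol (no key agreement, dropout handling, or finite-field arithmetic):

```python
import numpy as np

d = 5
rng = np.random.default_rng(42)

# Each party's private local update (e.g., one row of the item profile matrix)
update_A = rng.normal(size=d)
update_B = rng.normal(size=d)

# A and B derive a common pairwise mask, e.g., from a shared secret seed
mask = np.random.default_rng(7).normal(size=d)

# Each party uploads only a masked update; neither message reveals the raw update
msg_A = update_A + mask
msg_B = update_B - mask

# The aggregator sums the messages; the masks cancel, yielding the true sum
aggregate = msg_A + msg_B
```

The aggregator thus learns only the aggregated model, which is exactly what the inference attacks in Section 4 exploit when individual updates are sent in plaintext.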
6 Conclusion
We identify and formulate three types of federated MF problems based on the partition of the feature space. We then demonstrate the privacy threats against each type of federated MF, showing how the private user preference data, the private user/item profile matrices, and user IDs can potentially be leaked to honest-but-curious participants. Finally, we discuss privacy-preserving approaches to protect privacy in federated MF. For future work, we will experimentally study the power of the proposed privacy attacks by measuring the portion and accuracy of the inferred private data. Privacy threats against alternating-least-squares-based MF and other recommender systems also require further comprehensive study.