1 Introduction
Privacy protection in recommender systems is a notoriously challenging problem. There are often two competing goals at stake: similar users are likely to prefer similar products, movies, or locations, hence sharing of preferences between users is desirable. Yet, at the same time, this exacerbates privacy risks, simply because we are now not looking for aggregate properties of a dataset (such as a classifier) but for the properties and behavior of other users 'just like' this specific user. Such highly individualized behavioral patterns have been shown to facilitate provably effective user de-anonymization
[23, 36]. Consider the case of a couple, both using the same location recommendation service. Since both spouses share much of the same location history, it is likely that they will receive similar recommendations, based on other users' preferences similar to theirs. In this context sharing of information is desirable, as it improves overall recommendation quality.
Moreover, since their location histories are likely to be very similar, each of them will also receive recommendations to visit the places that their spouse visited (including, e.g., places of ill repute), regardless of whether the latter would like to share this information or not. This creates considerable tension between those two conflicting goals.
Differential privacy offers tools to overcome these problems. Loosely speaking, it offers the participants plausible deniability in terms of the estimate. That is, it guarantees that the recommendation would also have been issued with sufficiently high probability if another specific participant had not taken this action before. This is precisely the type of guarantee suitable to allay the concerns in the above situation
[8]. Recent work, e.g. by McSherry and Mironov [18], has focused on designing custom-built tools for differentially private recommendation. Many of the design decisions in this context are hand-engineered, and it is nontrivial to separate the choices made to obtain a differentially private system from those made to obtain a system that works well. Furthermore, none of these systems [18, 35] leads to very fast implementations.
In this paper we show that a large family of recommender systems, namely those using matrix factorization, are well suited to differential privacy. More specifically, we exploit the fact that sampling from the posterior distribution of a Bayesian model, e.g. via Stochastic Gradient Langevin Dynamics (SGLD) [34], can lead to estimates that are sufficiently differentially private [33]. At the same time, their stochastic nature makes them amenable to efficient implementation. Their generality means that we need not custom-design a statistical model for differential privacy, but rather that it is possible to retrofit an existing model to satisfy these constraints. The practical importance of this fact cannot be overstated: it means that no costly reengineering of deployed statistical models is needed. Instead, one can simply reuse the existing inference algorithm with a trivial modification to obtain a differentially private model.
This leaves the issue of performance. Some of the best reported results are those using GraphChi [14], which show that state-of-the-art recommender systems can be built using just a single PC within a matter of hours, rather than requiring hundreds of computers. In this paper, we show that by efficiently exploiting the power-law properties inherent in the data (e.g. most movies are hardly ever reviewed on Netflix), one can obtain models that achieve peak numerical performance for recommendation. More to the point, they are 3 times faster than GraphChi on identical hardware.
In summary, this paper describes by far the fastest matrix factorization based recommender system, and one that can be made differentially private using SGLD without losing performance. Most competing approaches excel at no more than one of those aspects. Specifically,


It is efficient, at the state of the art relative to other matrix factorization systems:


We develop a cache-efficient matrix factorization framework for general SGD updates.

We develop a fast SGLD sampling algorithm with bookkeeping that avoids adding Gaussian noise to the whole parameter space at each update while still maintaining the correctness of the algorithm.


And it is differentially private.


We show that sampling from a scaled posterior distribution for a matrix factorization system can guarantee user-level differential privacy.

We present a personalized differentially private method for calibrating each user’s privacy and accuracy.

We release only the item factors to the public, in a differentially private manner, and design a local recommender system for each user.

Experiments confirm that the algorithm can be implemented with high efficiency, while offering a very favorable privacy-accuracy tradeoff that nearly matches systems without differential privacy at meaningful privacy levels.
2 Background
We begin with an overview of the relevant ingredients, namely collaborative filtering using matrix factorization, differential privacy and a primer in computer architecture. All three are relevant to the understanding of our approach. In particular, some basic understanding of the cache hierarchy in microprocessors is useful for efficient implementations.
2.1 Collaborative Filtering
In collaborative filtering we assume that we have a set of $m$ users rating $n$ items. We only observe a small number of entries of the rating matrix $R \in \mathbb{R}^{m \times n}$, where $r_{ij}$ denotes the rating user $i$ gave to item $j$. A popular tool [13] for inferring the unobserved entries of $R$ is to approximate $R$ by a low-rank factorization, i.e.
(1) $R \approx U V^\top$
for some $U \in \mathbb{R}^{m \times k}$ and $V \in \mathbb{R}^{n \times k}$, where $k$ denotes the dimensionality of the feature space associated with each user and movie. In other words, (user, item) interactions are modeled via
(2) $r_{ij} \approx \langle u_i, v_j \rangle + b^u_i + b^v_j + b_0$
Here $u_i$ and $v_j$ denote row vectors of $U$ and $V$ respectively, and $b^u_i$ and $b^v_j$ are scalar offsets responsible for a specific user or movie respectively. Finally, $b_0$ is a common bias. A popular interpretation is that for a given item $j$, the elements of $v_j$ measure the extent to which the item possesses certain attributes. For a given user $i$, the elements of $u_i$ measure the extent of the interest that the user has in items scoring highly on the corresponding factors. Due to the conditions proposed in the Netflix contest, it is common to minimize the mean squared error of the deviations between true ratings and estimates. To address overfitting, a norm penalty is commonly imposed on $U$ and $V$. This yields the following optimization problem:
$\min_{U, V, b} \sum_{(i,j) \in \Omega} \big(r_{ij} - \langle u_i, v_j \rangle - b^u_i - b^v_j - b_0\big)^2 + \lambda \big(\|U\|_F^2 + \|V\|_F^2\big)$
where $\Omega$ denotes the set of observed (user, item) pairs.
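As a concrete sketch of the biased inner-product model and its regularized objective (the function names and signatures here are our own illustrative choices, not from any released code):

```python
import numpy as np

def predict(U, V, bu, bv, b0, i, j):
    """Predicted rating for user i and item j under the biased inner-product model."""
    return U[i] @ V[j] + bu[i] + bv[j] + b0

def objective(ratings, U, V, bu, bv, b0, lam):
    """Regularized squared error over observed (i, j, r) triples."""
    err = sum((r - predict(U, V, bu, bv, b0, i, j)) ** 2 for i, j, r in ratings)
    return 0.5 * err + 0.5 * lam * (np.sum(U * U) + np.sum(V * V))
```

With all parameters at zero and no regularization, the objective is simply half the sum of squared ratings, which is a convenient sanity check.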
A large number of extensions have been proposed for this model. For instance, incorporating co-rating information [27], neighborhoods, or temporal dynamics [12] can lead to improved performance. Since we are primarily interested in demonstrating the efficacy of differential privacy and its interaction with efficient systems design, we focus on the simple inner-product model with bias.
Bayesian View.
Note that the above optimization problem can be viewed as an instance of a maximum a posteriori (MAP) estimation problem. That is, one minimizes $-\log p(U, V \mid R)$, where, up to a constant offset,
$-\log p(U, V \mid R) = \sum_{(i,j) \in \Omega} \tfrac{1}{2}\big(r_{ij} - \langle u_i, v_j \rangle\big)^2 + \tfrac{\lambda_U}{2}\|U\|_F^2 + \tfrac{\lambda_V}{2}\|V\|_F^2$
and $p(u_i) \propto \exp\big(-\tfrac{\lambda_U}{2}\|u_i\|^2\big)$, and likewise for $v_j$. In other words, we assume that the ratings are conditionally normal given the inner product $\langle u_i, v_j \rangle$, and that the factors $u_i$ and $v_j$ are drawn from a normal distribution. Moreover, one can also introduce priors for the hyperparameters $\lambda_U, \lambda_V$ with a Gamma distribution.
While this setting is typically just treated as an afterthought of penalized risk minimization, we will use it explicitly when designing differentially private algorithms. The rationale for this is the deep connection between samples from the posterior and differentially private estimates. We will return to this aspect after introducing Stochastic Gradient Langevin Dynamics.
Stochastic Gradient Descent.
Minimizing the regularized collaborative filtering objective is typically achieved by one of two strategies: Alternating Least Squares (ALS) and stochastic gradient descent (SGD). The advantage of the former is that the problem is biconvex in $U$ and $V$ respectively, hence minimizing over $U$ for fixed $V$ (or vice versa) is a convex problem. On the other hand, SGD is typically faster to converge and it also affords much better cache locality. Instead of accessing e.g. all reviews for a given user (or all reviews for a given movie) at once, we only need to read the appropriate $(i, j, r_{ij})$ tuples. In SGD we each time update a randomly chosen rating record by:
(3) $u_i \leftarrow u_i + \eta_t\big((r_{ij} - \hat r_{ij})\, v_j - \lambda u_i\big), \qquad v_j \leftarrow v_j + \eta_t\big((r_{ij} - \hat r_{ij})\, u_i - \lambda v_j\big)$
where $\hat r_{ij} = \langle u_i, v_j \rangle + b^u_i + b^v_j + b_0$ and the bias terms are updated analogously.
One problem of SGD is that trivially parallelizing the procedure requires memory locking and synchronization for each rating, which can significantly hamper performance. [25] shows that a lock-free scheme achieves a nearly optimal solution when the data access is sparse. We build on this statistical property to obtain a fast system that is suitable for differential privacy.
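A minimal single-threaded sketch of the SGD update for one rating (names are our own; the lock-free parallelization above is deliberately omitted):

```python
import numpy as np

def sgd_step(U, V, bu, bv, b0, i, j, r, eta, lam):
    """One SGD update on a single rating (i, j, r); returns the prediction error."""
    err = r - (U[i] @ V[j] + bu[i] + bv[j] + b0)
    ui = U[i].copy()                      # keep old u_i for V[j]'s update
    U[i]  += eta * (err * V[j] - lam * U[i])
    V[j]  += eta * (err * ui   - lam * V[j])
    bu[i] += eta * (err - lam * bu[i])
    bv[j] += eta * (err - lam * bv[j])
    return err
```

Repeatedly applying the step to the same rating drives the prediction error toward zero, which is an easy way to sanity-check an implementation.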
2.2 Differential Privacy
Differential privacy (DP) [7, 9] provides means to provably protect personal information in a database, while allowing aggregate-level information to be accurately extracted. In our context this means that we protect user-specific sensitive information while using aggregate information to benefit all users.
Assume the actions of a statistical database are modeled via a randomized algorithm $\mathcal{A}$. Let the space of data be $\mathcal{X}$ and consider data sets $D, D' \subseteq \mathcal{X}$. Define $d(D, D')$ to be the edit distance or Hamming distance between data sets $D$ and $D'$; for instance, if $D$ and $D'$ are the same except for one data point, then $d(D, D') = 1$.
Definition 1 (Differential Privacy).
We call a randomized algorithm $\mathcal{A}$ $(\epsilon, \delta)$-differentially private if for all measurable sets $S \subseteq \mathrm{range}(\mathcal{A})$ and for all $D, D'$ such that the Hamming distance $d(D, D') = 1$,
$\Pr[\mathcal{A}(D) \in S] \le e^{\epsilon} \Pr[\mathcal{A}(D') \in S] + \delta.$
If $\delta = 0$ we say that $\mathcal{A}$ is $\epsilon$-differentially private.
The definition states that if we arbitrarily replace any individual data point in a database, the output distribution of the algorithm does not change much. The parameter $\epsilon$ controls the maximum amount of information gained about an individual in the database from the output of the algorithm. When $\epsilon$ is small, it prevents any form of linkage attack on individual data records (e.g., linkage of Netflix data to IMDB data [23]). We refer readers to [8] for detailed interpretations of differential privacy in terms of statistical testing, Bayesian inference and information theory.
An interesting side-effect of this definition in the context of collaborative filtering is that it also limits the influence of so-called whales, i.e. users who submit extremely large numbers of reviews. Their influence is curtailed, at least under the assumption of an equal level of differential privacy per user. In other words, differential privacy confers robustness on collaborative filtering.
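To make the definition concrete, consider the classic randomized-response mechanism for a single bit (purely illustrative, not part of our system). Its two output distributions differ by a factor of at most $e^\epsilon$, which is exactly the $\epsilon$-DP condition with $\delta = 0$:

```python
import math
import random

def randomized_response(bit, eps, rng):
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    The odds ratio between the two input cases is exactly e^eps, so this
    mechanism is eps-differentially private."""
    p_true = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if rng.random() < p_true else 1 - bit
```

For instance, with $\epsilon = 1$ the truthful-report probability is about 0.73, so an observer gains only limited information about any single respondent.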
Wang et al. [33] show that posterior sampling with bounded log-likelihood is essentially the exponential mechanism [19], therefore protecting differential privacy for free (similar observations were made independently in [21, 5]). Wang et al. [33] also suggest that a recent line of work [34, 4, 6] using stochastic gradient descent for hybrid Monte Carlo sampling essentially preserves differential privacy with the same algorithmic procedure. The consequence for our application is very interesting: if we trust that the MCMC sampler has converged, i.e. if we get a sample that is approximately drawn from the posterior distribution, then we can release that one sample privately. If not, we can calibrate the MCMC procedure itself to provide differential privacy (typically at the cost of a much poorer solution).
2.3 Computer Architecture
A key difference between generic numerical linear algebra, as commonly used e.g. for deep networks or generalized linear models, and the methods used for recommender systems is that the access patterns regarding users and items are highly non-uniform. This is a significant advantage, since it allows us to exploit the caching hierarchy of modern CPUs to benefit from higher bandwidth than what disk or main memory access would permit.
Table 1: Capacity and measured bandwidth of the storage hierarchy on a desktop computer.

Device     Capacity   Bandwidth (read)   Bandwidth (write)
Hard Disk  3TB        150MB/s            100MB/s
SSD        256GB      500MB/s            350MB/s
RAM        16GB       14GB/s             9GB/s
L3 Cache   6MB        16-44GB/s          7-30GB/s
L1 Cache   32KB       74-135GB/s         44-80GB/s
A typical computer architecture consists of a hard disk, solid-state drive (SSD), random-access memory (RAM) and CPU caches. Many factors affect the real available bandwidth, such as read and write patterns, block sizes, etc. We measured this for a desktop computer; see Table 1 for a quick overview. A good algorithm design should push the data flow to the CPU cache level, hiding the latency of SSD or even RAM and amplifying the available bandwidth.
The key strategy in obtaining high-throughput collaborative filtering systems is to obtain peak bandwidth on each of the subsystems by efficient caching. That is, if a movie is frequently reused, it is desirable to retain it in the CPU cache. This way, we neither suffer the high latency (100ns per request) of a random read from memory, nor do we have to pay for the comparably slower bandwidth of RAM relative to the CPU cache. This intuition is confirmed by the observed cache miss rates reported in the experiments in Section 6.
3 Differentially Private Matrix Factorization
We start by describing the key ideas and algorithmic framework for differentially private matrix factorization. The method, which involves preprocessing the data and then sampling from a scaled posterior distribution, is provably differentially private and has profound statistical implications. We then describe a specific Monte Carlo sampling algorithm, Stochastic Gradient Langevin Dynamics (SGLD), and justify its use in our setting. We also introduce a novel way to personalize the privacy protection for individual users. Finally, we discuss how to develop fast cache-efficient solvers that exploit bandwidth-limited hardware, applicable to general SGD-style algorithms.
Our differential privacy mechanism relies on a recent observation that posterior sampling preserves differential privacy, provided that the log-likelihood of each user is uniformly bounded [33]. This simple yet remarkable result suggests that sampling from the posterior distribution is, to some extent, differentially private for free. In our context, the claim is that if every per-rating log-likelihood term is uniformly bounded in magnitude by some $B$ (for convenience of notation we omit the biases below in favor of a slightly more succinct notation), then the method that outputs a sample from
$p(U, V \mid R) \propto \exp\Big(-\sum_{(i,j) \in \Omega} \tfrac{1}{2}\big(r_{ij} - \langle u_i, v_j \rangle\big)^2 - \tfrac{\lambda}{2}\big(\|U\|_F^2 + \|V\|_F^2\big)\Big)$
preserves $4B$-differential privacy. Moreover, when we want to set the privacy loss to another number $\epsilon$, we can easily do so by simply rescaling the exponent by $\epsilon / (4B)$.
The question now is whether the per-rating loss is indeed bounded. Since the ratings are bounded and we can restrict the factors to a reasonable sublevel set (e.g. $\|u_i\|, \|v_j\| \le C$ for a constant $C$), every summand is bounded by some $B$. This does not affect the privacy claim as long as the sublevel set is chosen independently of the data.
The per-user log-likelihood could still be large, however, if some particular users rated many movies. This issue is inevitable even if all observed users have few ratings, since differential privacy also protects users not in the database. We propose two theoretically inspired algorithmic solutions to this problem:
 Trimming:

We may randomly delete ratings from those who rated many movies, so that the maximum number of ratings from a single user is not too much larger than the average number of ratings. This procedure is the underlying gem that allows OptSpace (the very first provably correct matrix factorization based low-rank matrix completion method) [11] to work.
 Reweighting:

Alternatively, one can weight each user appropriately so that those who rated many movies have a smaller weight for each rating. McSherry and Mironov [18] used this reweighting scheme for controlling privacy loss. A similar approach is considered in the study of non-uniform and power-law matrix completion [20, 29], where the weighted trace norm has the same effect as reweighting the loss functions.
In addition, these procedures have practical benefits for the robustness of the recommendation system, since they prevent any malicious user from injecting too much influence into the system; see e.g. Wang and Xu [32] and Mobasher et al. [22]. Another justification of these two procedures is that, if the fully observed matrix truly lies in a low-dimensional subspace, neither of them changes the underlying subspace. Therefore, the solutions should be similar to those of the non-preprocessed version.
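Both preprocessing options are straightforward to implement. A sketch, where the per-user cap `tau` and the function names are our own illustrative choices:

```python
import random
from collections import defaultdict

def trim(ratings, tau, seed=0):
    """Trimming: randomly keep at most tau ratings per user."""
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for (i, j, r) in ratings:
        by_user[i].append((i, j, r))
    out = []
    for recs in by_user.values():
        out.extend(recs if len(recs) <= tau else rng.sample(recs, tau))
    return out

def weights(ratings, tau):
    """Reweighting: per-user weights w_i = min(1, tau / n_i), so heavy
    raters contribute less per rating."""
    counts = defaultdict(int)
    for (i, _, _) in ratings:
        counts[i] += 1
    return {i: min(1.0, tau / n) for i, n in counts.items()}
```

Trimming bounds the per-user log-likelihood by construction; reweighting achieves the same bound in expectation while keeping all ratings in the data.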
The procedure for differentially private matrix factorization (DPMF) is summarized in Algorithm 1. Note that this is a conceptual sketch (we will discuss an efficient variant thereof later). The following theorem guarantees that our procedure is indeed differentially private.
Theorem 1.
Algorithm 1 obeys $\epsilon$-differential privacy if the sample is exact, and $(\epsilon, (1 + e^{\epsilon})\delta)$-differential privacy if the sample is drawn from a distribution that is at most $\delta$ away from the target distribution in $L_1$ distance.
The proof (given in the appendix) shows that this procedure is in fact an instance of the exponential mechanism [19], with the utility function being the negative MF objective and its sensitivity bounded in terms of $B$. Note that this can be extended to considerably more complex models. This is the strength of our approach: a large variety of algorithms can be adapted quite easily into differential-privacy capable models.
Statistical properties. What about the utility of this procedure? We argue that we do not lose much accuracy by sampling from a distribution instead of performing exact optimization. Here we define utility/accuracy as how well the output predicts new data.
Our matrix factorization formulation can be treated as a maximum a posteriori (MAP) estimator of Bayesian Probabilistic Matrix Factorization (BPMF) [26]; therefore, the distribution we are sampling from is actually a scaled version of the posterior distribution.
When the exponent is not rescaled (i.e. $\epsilon = 4B$), Wang et al. [33] show that a single sample from the posterior distribution is consistent whenever the Bayesian model that gives rise to it is consistent, and is asymptotically only a constant factor away from matching the Cramér-Rao lower bound whenever asymptotic normality (the Bernstein-von Mises theorem) of the posterior distribution holds. Therefore, we argue that by taking only one sample from the posterior distribution, our results will not be much worse than the MAP or the posterior mean estimator in BPMF. Moreover, since the result does not collapse to a point estimator, the output of this sampling procedure does not tend to overfit [34]. When $\epsilon < 4B$ we start to lose accuracy, but since we are still sampling from a scaled posterior distribution, the same statistical property applies and the result remains asymptotically near-optimal, with an asymptotic relative efficiency that degrades gracefully as $\epsilon$ decreases. In fact, monotonic rescaling leaves the relative order of predicted ratings unchanged.
3.1 Personalized Differential Privacy
Another interesting feature of the proposed procedure is that it allows us to calibrate the level of privacy protection for every user independently, via the novel observation that the weight assigned to a user is linear in the amount of privacy we can guarantee for that particular user.
We use the same sampling algorithm, and our guarantees in Theorem 1 still hold. The idea is that we can configure the system to provide a basic level of privacy protection for all users, say $\epsilon$. As we explained earlier, this is the level of privacy that we can get more or less "for free". The protection of DP is sufficiently strong as to include even those users that are not in the database.
By adjusting the weight parameter, we can make the privacy protection stronger for particular users, according to how much privacy they request. This procedure makes intuitive sense: if some user wants perfect privacy, we can set their weight to $0$ and they are effectively no longer in the database. For users who do not care about privacy, their ratings are assigned the default weight. Formally, we define personalized differential privacy as follows:
Definition 2 (Personalized Differential Privacy).
An algorithm $\mathcal{A}$ is $\epsilon_i$-personalized differentially private for user $i$ in database $D$ if, for any measurable set $S$ in the range of the algorithm,
$\Pr[\mathcal{A}(D) \in S] \le e^{\epsilon_i} \Pr[\mathcal{A}(D') \in S]$
for any $D'$ such that $d(D, D') = 1$ and the differing data point belongs to user $i$.
We claim that:
Theorem 2.
If we set the weight $w_i$ for user $i$ such that $w_i \epsilon \le \epsilon_i$, then Algorithm 1 guarantees $\epsilon_i$-personalized differential privacy for user $i$.
The proof is a straightforward verification of the definition; we defer it to the Appendix. Note that if we set $\epsilon = 4B$ (so we are essentially sampling from the unscaled posterior distribution), we get $4B w_i$-personalized DP for user $i$.
In summary, if we simply set $w_i = 1$ for every user, the method protects differential privacy for everybody at very little cost, and by setting the weight vector $w$ appropriately, we can provide personalized service for users who demand more stringent DP protection. To the best of our knowledge, this is the first method of its kind to protect differential privacy in a personalized fashion.
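The weight rule is a one-liner in practice. A sketch (function name and argument conventions are ours) that clamps every weight to at most the default weight of 1:

```python
def personalized_weights(eps_user, eps_base):
    """Per-user rating weights for personalized DP: w_i = min(1, eps_i / eps_base).
    Users requesting stronger privacy (smaller eps_i) get proportionally
    down-weighted ratings; eps_i = 0 removes the user's influence entirely."""
    return {u: min(1.0, e / eps_base) for u, e in eps_user.items()}
```

For example, with a base privacy level of 1.0, a user requesting 0.5-DP gets weight 0.5, a user content with 2.0-DP keeps the default weight 1.0, and a user requesting perfect privacy gets weight 0.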
4 Efficient sampling via SGLD
Clearly, sampling from the scaled posterior is nontrivial. For a tractable approach we use a recent MCMC method named Stochastic Gradient Langevin Dynamics (SGLD) [34], an annealing between stochastic gradient descent and Langevin dynamics that samples from the posterior distribution [24]. The basic update rule is
(4) $\theta_{t+1} = \theta_t + \tfrac{\eta_t}{2}\Big(\nabla \log p(\theta_t) + \tfrac{N}{n} \sum_{i=1}^{n} \nabla \log p(x_i \mid \theta_t)\Big) + z_t, \qquad z_t \sim \mathcal{N}(0, \eta_t I)$
where the bracketed term is a stochastic gradient of the log-posterior computed using only one or a small number of ratings ($n$ out of $N$). In other words, the updates are almost identical to those used in stochastic gradient descent. The key difference is that a small amount of Gaussian noise is added to the updates. This allows us to solve the problem extremely efficiently. We describe our efficient implementation of this algorithm in Section 5.4.
The basic idea of SGLD is that when we are far away from the basin of convergence, the gradient of the log-posterior is much larger than the added noise, so the algorithm behaves like stochastic gradient descent. As we approach the basin of convergence, the step size $\eta_t$ becomes small, so the noise dominates and the algorithm behaves like a Brownian motion. Moreover, as $\eta_t$ gets small, the probability of accepting the proposal in a Metropolis-Hastings adjustment converges to $1$, so we need not perform this adjustment at all as the algorithm proceeds, as designed above.
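For intuition, a single SGLD step on a generic parameter vector can be sketched as follows (this is the textbook update of Eq. (4), not our optimized implementation of Section 5.4):

```python
import numpy as np

def sgld_step(theta, grad_log_post, eta, rng):
    """One SGLD update: a (stochastic-)gradient step plus N(0, eta) noise.
    grad_log_post(theta) should return an estimate of the gradient of the
    log-posterior at theta."""
    noise = rng.normal(0.0, np.sqrt(eta), size=theta.shape)
    return theta + 0.5 * eta * grad_log_post(theta) + noise
```

Run on a standard normal log-posterior (gradient $-\theta$) with a small fixed step size, the iterates approximately sample from $\mathcal{N}(0, 1)$, illustrating that the procedure explores the posterior rather than collapsing to its mode.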
This seemingly heuristic procedure was later shown to be consistent in
[28, 30], where asymptotic "in-law" and "almost sure" convergence of SGLD to the correct stationary distribution is established. More recently, Teh et al. [31] further strengthened the convergence guarantee to cover any finite number of iterations. This line of work justifies our approach: if we run SGLD for a large number of iterations, we end up sampling from the distribution that provides us differential privacy. By taking more iterations, we can make the distance $\delta$ to the target distribution arbitrarily small.
5 System Design
The performance improvement over existing libraries such as GraphChi is due to cache-efficient design, prefetching, pipelining, exploitation of the power-law properties of the data, and judicious optimization of random number generation. This leads to a system that comfortably surpasses even moderately optimized GPU codes.
We primarily focus on the stochastic gradient descent solver; subsequently we provide some details on how to extend it to SGLD. Inference requires a very large number of the following operations on the data:

Read a rating triple $(i, j, r_{ij})$, possibly from disk, unless the data is sufficiently tiny to fit into RAM.

For the given pair of user $i$ and item $j$, fetch the vectors $u_i$ and $v_j$ from memory.

Compute the inner product $\langle u_i, v_j \rangle$ on the CPU.

Update $u_i$ and $v_j$ and write their new values to RAM.
To illustrate the impact of these operations, consider training a 2048-dimensional model on the roughly 100 million rating triples of Netflix. Per iteration this requires over 3.2TB of read/write operations to RAM. At a main memory bandwidth of 20GB/s and a latency of 100ns for each of the 200 million cache misses, each pass would take over 6 minutes. Instead, our code accomplishes this task in approximately 10 seconds by using the steps outlined below.
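The traffic figure can be verified with a back-of-envelope computation; here we assume float32 factors, dimensionality 2048, and roughly $10^8$ ratings:

```python
# Back-of-envelope RAM traffic for one SGD epoch on Netflix, assuming
# d = 2048 float32 factors and roughly 100 million ratings.
d = 2048
ratings = 100_000_000
bytes_per_vec = 4 * d                  # float32
# per rating: read u_i and v_j, then write both back (2 reads + 2 writes)
traffic = ratings * 4 * bytes_per_vec
print(traffic / 1e12)                  # about 3.28 TB
```

At 20GB/s of sustained main-memory bandwidth, moving that much data alone already takes minutes, which is why keeping hot vectors in cache matters so much.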
5.1 Processing Pipeline
To deal with the data flow from disk to CPU, we use a pipelined design, decomposing global and local state akin to [1]. This means that we process users sequentially, thus reducing the retrieval cost per user, since the operations are amortized over all of their ratings. This effectively halves IO. Moreover, since the data cannot be assumed to fit into RAM, we pipeline reads from disk. This hides latency and avoids stalling the CPUs. A writer thread periodically snapshots the model (i.e. the factor matrices and biases) to disk.
Note that for personalized recommender systems that require considerable personalized hidden state, such as topic models, or autoregressive processes, we may want to write a snapshot of the userspecific data, too.
5.2 Cache Efficiency
The previous section discussed how to keep the data pipeline filled and how to reduce user-specific cache misses by pre-aggregating ratings on disk. Next we need to address cache efficiency with regard to movies. More to the point, we need to exploit cache locality relative to the CPU core rather than simply avoiding cache misses. The basic idea is that each CPU core reads an entire cache line (commonly 64 bytes) from RAM at a time, so algorithm designers should not waste any of it: a fetched cache line should be fully utilized.
We exploit the fact that movie ratings follow a power law [10], as is evident e.g. for Netflix in Figure 1. This means that if we succeed in keeping frequently rated movies in the CPU cache, we should see substantial speedups. Note that traditional matrix blocking tricks, as widely used for matrix multiplication, are not useful here, due to the sparsity of the rating matrix $R$. Instead, we decompose the movies into tiers of popularity. To illustrate, consider a decomposition into three blocks consisting of the Top 500, the Next 4000, and the remaining long tail.
Within each block, we process a batch of users simultaneously. This way we can preserve the associated user vectors in cache and we are likely to cache the movie vectors, too (in particular for the Top 500 block). Also, parallelizing all the updates for multiple users does not require locks. Movie parameters are updated in a Hogwild fashion [25].
This design is particularly efficient for low-dimensional models, since the Top 500 block fits into L1 cache (these movies account for 44% of all movie ratings in the Netflix dataset), the Next 4000 fit into L2, and their ratings will typically reside in L3. Even for high-dimensional models we can still fit a substantial fraction of all ratings into cache, albeit L3 cache.
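A sketch of the popularity-tier decomposition (the cut-offs mirror the Top 500 / Next 4000 / tail example above; the function name is our own):

```python
from collections import Counter

def popularity_tiers(ratings, cuts=(500, 4000)):
    """Split items into popularity tiers (e.g. Top 500 / Next 4000 / long tail)
    so that the hottest item vectors can stay resident in L1/L2 cache while
    a block is being processed."""
    counts = Counter(j for (_, j, _) in ratings)
    ranked = [j for j, _ in counts.most_common()]
    tiers, start = [], 0
    for c in cuts:
        tiers.append(set(ranked[start:start + c]))
        start += c
    tiers.append(set(ranked[start:]))   # long tail
    return tiers
```

Training then iterates over tiers, processing a batch of users within each tier so that both the user vectors and the tier's item vectors remain cache-resident.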
5.3 Latency Hiding and Prefetching
To avoid the penalty of random requests we perform latency hiding by prefetching. That is, we actively request the parameter vectors before the corresponding rating is to be updated. For dimensionalities below 256, accurate prefetching yields a data flow directly into L1 cache. Beyond that, the latent variables can become too large to benefit from the lowest level of caching, due to the limited cache sizes of modern computers. We provide a detailed caching analysis in Section 6 to illustrate the effect of these techniques.
5.4 Optimizations for SGLD
The data flow of SGLD is almost analogous to that of SGD, albeit with a number of complications. First off, note that (4) applies to the whole parameter matrix rather than just to a single vector. Following [3] we can derive an unbiased approximation of the gradient in (4) which is nonzero only for $u_i$ and $v_j$, namely
$g(u_i) = N_i \big(r_{ij} - \langle u_i, v_j \rangle\big) v_j - \lambda_U u_i \quad\text{and}\quad g(v_j) = N_j \big(r_{ij} - \langle u_i, v_j \rangle\big) u_i - \lambda_V v_j,$
where $N_j$ and $N_i$ denote the number of ratings of item $j$ (by all users) and the number of ratings by user $i$, respectively. The hyperparameters do not incur any major cost: $\lambda_U$ and $\lambda_V$ are diagonal matrices with a Gamma distribution over them, and we simply perform Gibbs sampling on them once per round. However, the most time-consuming part is sampling the remaining vectors, i.e. the rows of $U$ and $V$, since this both requires dense updates and, moreover, many random numbers, which adds nontrivial cost.
 Dense Updates:

Note that unless we encounter the triple $(i, j, r_{ij})$, all other parameters are only updated by adding Gaussian noise. This means that by keeping track of when a parameter was last updated, we can simply aggregate the updates (the Normal distribution is closed under addition). That is, $k$ subsequent additions of $\mathcal{N}(0, \eta_t)$ noise amount to a single draw from $\mathcal{N}(0, \sum_t \eta_t)$. This is possible since we only need the current value of a parameter when we encounter a new triple that touches it.
 Table Lookup:

Drawing iid samples from a Gaussian is quite costly, easily dominating all other floating point operations combined. We address this by pre-generating a large table of Gaussian random numbers [17] and then performing random lookups within the table. More to the point, a lookup table of random numbers is statistically indistinguishable from truly fresh draws until a very large number of samples has been drawn from it (this follows from the slow rate of convergence of two-sample tests), hence a few MB of data suffice. Finally, for cache efficiency, we read contiguous segments with a random offset (this adds a small amount of dependence, which is easily addressed by using a larger table).
A cautionary note is that the impact of this approach on privacy, namely how it affects the stationary distribution of the SGLD, is unknown. In our experiments, the results are indistinguishable for any moderately sized finite lookup table (see our experiments in Section 6.4).
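The two tricks above, lazy noise aggregation and the Gaussian lookup table, can be sketched as follows (a simplification of the actual implementation; names are our own):

```python
import numpy as np

def catch_up(theta, last_t, now_t, etas, rng):
    """Lazily apply the Gaussian noise a parameter missed between its last
    touch at step last_t and the current step now_t. Independent N(0, eta_t)
    draws sum to a single draw from N(0, sum of the skipped eta_t)."""
    var = float(sum(etas[last_t:now_t]))
    if var > 0.0:
        theta = theta + rng.normal(0.0, np.sqrt(var), size=theta.shape)
    return theta

class GaussianTable:
    """Pre-generated table of standard normals; sampling reads a contiguous
    segment at a random offset instead of drawing fresh Gaussians, which is
    far cheaper than calling the generator per value."""
    def __init__(self, size=1 << 20, seed=0):
        self.rng = np.random.default_rng(seed)
        self.table = self.rng.standard_normal(size)

    def sample(self, n):
        start = int(self.rng.integers(0, len(self.table) - n))
        return self.table[start:start + n]
```

In the real solver, `catch_up` is invoked only when a user or item vector is touched by a new rating, so the dense noise updates never have to be materialized for the whole parameter space.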
6 Experiments and Discussion
We now investigate the efficiency and accuracy of our fast SGD solver and our Stochastic Gradient Langevin Dynamics solver, compared with state-of-the-art available recommenders. We also explore the accuracy achieved under differential privacy by our proposed method while varying the privacy budget.
6.1 Comparisons
We compare the performance of both the SGD solver and the SGLD solver to other publicly available recommenders and one closedsource solver. In particular, we compare to both CPU and GPU solvers, since the latter tend to excel in massively parallel floating point operations.
 GraphChi

Most of our experiments focus on a direct comparison to GraphChi [14]. This is primarily due to the fact that the code for GraphChi is publicly available as open source, and to its very good performance.
 GraphLab Create

is a closed-source data analysis platform [16]. It is currently the fastest recommender system available, being slightly faster than GraphChi. We compared our system to GraphLab Create, albeit without the fine-grained diagnostics that were possible for GraphChi.
 BidMach

is a GPU-based system [37]. It reports runtimes of 90, 129 and 600 seconds for 100, 200 and 500 dimensions respectively, using an Amazon g2.2xlarge instance on the Netflix dataset (see http://github.com/BIDData/BIDMach/wiki/Benchmarks). This is slower than the runtimes of 48, 63, and 83 seconds for 128, 256, and 512 dimensions that we achieve without GPU optimization on a c3.8xlarge instance.
 Spark

is a distributed system (Spark MLlib) for inferring recommendations and factorizations. In a recent comparison the argument has been made that it is somewhat slower than GraphLab while being substantially faster than Mahout (see http://stanford.edu/~rezab/sparkworkshop/slides/xiangrui.pdf, slide 31).
6.2 Data
We use two datasets. The first is the well-known Netflix Prize dataset, consisting of a training set of 99M ratings spanning 480k customers and their ratings of almost 18k movies, each movie being rated on a scale of 1 to 5 stars. Additionally, we use the released validation set, which consists of 1.4M ratings, for validation purposes.
Secondly, we use the Yahoo music recommender dataset, consisting of almost 263M ratings of 635k music items by 1M users. We also use the released validation set, which consists of 6M ratings, for validation. We rescale each rating from the original 0 to 100 scale to a scale of 0 to 5. We compare performance on both datasets since their sampling strategies are somewhat incomparable (e.g. Netflix has considerable covariate shift in the test dataset). Moreover, this larger dataset poses further challenges to cache efficiency due to the larger number of items to be recommended.
6.3 Runtime
For efficient computation, GraphChi first needs to preprocess the data into shards using its parallel sliding windows scheme [14]. Once the data is partitioned, it can process the graphs efficiently. For comparison, we partition the rating matrices of both the Netflix Prize data and the Yahoo Music data into blocks, with each block containing all the ratings from around 1000 users. Each time our algorithm reads one block from disk. For GraphChi and GraphLab Create we use the default partition strategy. We run all the experiments on an Amazon c3.8xlarge instance running Ubuntu 14.04 with 32 CPUs and 60GB RAM.
For SGD-based methods we initialize the learning rate and regularizer separately for the Netflix data and the Yahoo Music data, and decay the learning rate after each round, using the same decay rate for both datasets. For our fast SGLD solver we likewise set the step size and hyperparameters per dataset. In practice, to speed up SGLD's burn-in procedure, we multiply the learning rate in the Gaussian noise by a temperature parameter [4], set separately for the Netflix and Yahoo data. Since it is nontrivial to observe the test RMSE in each epoch when using GraphLab Create, we only report the timing of GraphLab Create and all other methods in Figure 5. Note that we were unable to obtain performance results from BIDMach for the Yahoo dataset, since Scala encountered memory management issues; however, we have no reason to believe that the results would be any more favorable to BIDMach than the findings on the Netflix dataset. For reproducibility, these results were obtained on an AWS g2.8xlarge instance. To illustrate convergence over time, we run all methods for a fixed number of epochs (15 and 30 epochs respectively), since we observe that our SGD solver converges within that budget. Figure 2 shows our timing results along with convergence as we vary the model dimension.
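As an illustration, here is a hedged sketch of a per-round decay schedule and tempered SGLD noise; the 1/(1 + decay * t) form and all parameter values are our assumptions, since the paper's exact schedule is elided above:

```python
import random

def decayed_lr(eta0, decay, t):
    # One common per-round schedule (an assumption, not necessarily
    # the paper's exact formula): shrink the step size each round t.
    return eta0 / (1.0 + decay * t)

def sgld_noise(eta, temperature=1.0):
    # Tempered Gaussian noise for SGLD burn-in: the injected noise
    # has variance temperature * eta rather than eta.
    return random.gauss(0.0, (temperature * eta) ** 0.5)
```

With temperature below 1, the chain behaves more like SGD early on, which is the burn-in speedup described above.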
Both of our solvers, i.e. CSGD and fast SGLD, benefit from our caching algorithm. CSGD is around 2 to 3 times faster than GraphChi and GraphLab, while simultaneously outperforming GraphChi's accuracy. The primary reason for the performance gap lies in the order in which GraphChi processes data: it partitions the data (both users and items) into random subsets and then optimizes over only one such sub-block at a time. While the latter is fast, it negatively affects convergence, as can be seen in Figure 2.
Note that the algorithm required for fast SGLD is rather more complex, since it samples from the Bayesian posterior. Consequently, it is slower than plain SGD. Nonetheless, its throughput is comparable to GraphChi (despite the latter solving a much simpler problem). One problem with SGLD is that the more complex the model, the worse its convergence becomes, since we are sampling from a larger state space. This is possibly due to slow mixing, a known problem of SGLD [2]. Improving the mixing rate with a more advanced sampler based on stochastic differential equations, e.g. [4, 6], while keeping the updates cache efficient, is important future work. To the best of our knowledge, we are the first to report convergence results for SGLD at this scale.
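The SGLD update used throughout follows the standard form of Welling and Teh [34]: a half-step along the stochastic gradient of the log-posterior plus Gaussian noise whose variance equals the step size. A minimal per-parameter sketch:

```python
import math
import random

def sgld_step(theta, grad_log_post, eta):
    """One SGLD step (Welling & Teh, 2011): theta moves by
    eta/2 times the stochastic log-posterior gradient, plus
    N(0, eta) noise on every coordinate."""
    return [
        t + 0.5 * eta * g + random.gauss(0.0, math.sqrt(eta))
        for t, g in zip(theta, grad_log_post)
    ]
```

For matrix factorization, `theta` would be the concatenated user and item factors touched by the current minibatch; that wiring is omitted here.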
6.4 Convergence
As described above, the convergence behavior of SGLD and SGD based methods is quite different. We illustrate convergence for a small dimension in Figure 6. Essentially, CSGD finds a MAP estimate within a few rounds and then begins to overfit, whereas SGLD first needs to burn in before its sampling procedure starts. Note that SGLD converges very quickly in this case, but for higher dimensions it is slower to converge. Careful tuning of the learning rate is critical here.
We also investigated the accuracy of the model as a function of the size of the Gaussian lookup table. That is, we checked whether replacing explicit draws from the normal distribution by reading consecutive precomputed values from memory is valid. As can be seen in Figure 4, for all but the smallest tables this suffices: once we have more than 10,000 numbers, we no longer need a Gaussian random number generator, and the results obtained are essentially indistinguishable (obviously, for large numbers of dimensions somewhat more values are needed).
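A sketch of the lookup-table idea: draw the table once, then read consecutive entries instead of calling the generator on every update. The table size and wrap-around policy here are our assumptions:

```python
import random

class GaussianTable:
    """Replace repeated calls to a Gaussian RNG with consecutive
    reads from a precomputed table of standard normal samples."""
    def __init__(self, size=10000, seed=0):
        rng = random.Random(seed)
        self.table = [rng.gauss(0.0, 1.0) for _ in range(size)]
        self.pos = 0

    def next(self):
        z = self.table[self.pos]
        self.pos = (self.pos + 1) % len(self.table)  # wrap around
        return z
```

Reads are sequential, so this access pattern is also friendly to the cache prefetcher discussed below.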
6.5 Cacheefficient Design
Table 2: L1 and L3 cache miss rates of CSGD and GraphChi.

K     | CSGD            | GraphChi
      | L1      L3      | L1      L3
16    | 2.84%   0.43%   | 12.77%  2.21%
256   | 2.85%   0.50%   | 12.89%  2.34%
2048  | 3.3%    1.7%    | 15%     9.8%
We show the cache efficiency of CSGD and GraphChi in this section. Our data access pattern accelerates hardware cache prefetching. In addition, we use software prefetching to fetch movie factors in advance. However, software prefetching is tricky to implement in practice, because we need to know the prefetching stride in advance, i.e. when to prefetch the movie factors. In our experiments we empirically set the prefetching stride to 2. The experiments are set up as follows. In each gradient update step, once the parameters, e.g. the user and movie factors in (3), have been read, they stay in cache for a while until they are flushed out by new parameters. What we really care about in this section is whether each parameter is already in cache the first time it is read by the CPU. If it is not, there is a cache miss, which stalls the CPU; after that, the succeeding updates (which depend on the algorithm, e.g. SGD or SGLD) run at cache speed. We use Cachegrind [15] as a cache profiler to analyze cache misses. The results in Table 2 show that our algorithm is quite cache friendly compared with GraphChi across all dimensions. This is likely due to the way GraphChi ingests data: it traverses one user and item block at a time. As a result it has a less favorable profile of access frequencies and needs to fetch data from memory more often. We believe this to be the root cause of both the decreased computational efficiency and the slower convergence of its code.
6.6 Privacy and Accuracy
We now investigate the influence of privacy loss on accuracy. As discussed previously, a small rescaling factor helps us obtain a good bound on the loss function. For private collaborative filtering, we first trim the training data by capping each user's maximum allowable number of ratings, with separate caps for the Netflix Prize dataset and the Yahoo Music data, and set the weight of each user accordingly; the different trimming strengths yield dataset-specific scaling constants for the Netflix and Yahoo data. Note that such a cap is quite reasonable, since in practice most users rate far fewer movies (due to the power law nature of the rating distribution). Moreover, for users who exceed the cap, we can obtain quite a good approximation of their profiles using only a reasonably sized random sample of their ratings. This leaves a dataset of 33M ratings for Netflix and 100M ratings for the Yahoo Music data. We study the prediction accuracy, i.e. the utility of our private method, by varying the differential privacy budget for a fixed model dimensionality.
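The trimming step can be sketched as follows; the cap value and seed are illustrative, since the paper's exact per-dataset caps are elided above:

```python
import random

def trim_user_ratings(user_ratings, max_ratings, seed=0):
    """Cap each user's contribution: users above the cap are
    represented by a uniform random subsample of their ratings,
    which bounds any single user's influence on the model."""
    rng = random.Random(seed)
    trimmed = {}
    for user, ratings in user_ratings.items():
        if len(ratings) > max_ratings:
            trimmed[user] = rng.sample(ratings, max_ratings)
        else:
            trimmed[user] = list(ratings)
    return trimmed
```

Bounding the number of ratings per user is what makes the user-level sensitivity analysis in the appendix go through.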
The parameters of the experiment are set separately for the Netflix data and for the Yahoo data. In addition, because we are sampling P, we fix the regularization parameters, which are estimated by a non-private SGLD, throughout this section.
While we are sampling jointly, we essentially only need to release the item factors. Users can then apply their own data to obtain the full model and run a local recommender system:
u_i = \arg\min_{u \in \mathbb{R}^k} \sum_{j \in \Omega_i} \big(r_{ij} - \langle u, v_j \rangle\big)^2 + \lambda \|u\|^2 \qquad (5)
The local predictions, i.e. in our context the utility of the differentially private matrix factorization method, as a function of the privacy loss are shown in Figure 7.
More specifically, the model (5) is a two-stage procedure: it first takes the differentially private item vectors and then uses them to obtain locally non-private user parameter estimates. This is perfectly admissible since users have no expectation of privacy with regard to their own ratings.
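A minimal sketch of the local second stage, assuming released item factors `V` (a dict from item id to factor vector) and plain gradient descent in place of whatever solver the paper actually uses:

```python
def fit_user_factors(ratings, V, lam=0.1, eta=0.01, steps=2000):
    """Locally fit one user's factor vector u against released item
    factors V by minimizing
        sum_j (r_j - <u, v_j>)^2 + lam * ||u||^2
    with plain gradient descent. `ratings` maps item id -> rating."""
    k = len(next(iter(V.values())))
    u = [0.0] * k
    for _ in range(steps):
        grad = [2.0 * lam * ui for ui in u]  # ridge term
        for j, r in ratings.items():
            pred = sum(ui * vi for ui, vi in zip(u, V[j]))
            for d in range(k):
                grad[d] += 2.0 * (pred - r) * V[j][d]
        u = [ui - eta * g for ui, g in zip(u, grad)]
    return u
```

Since only the item factors are released, this step runs entirely on the user's device and never exposes the user's own ratings.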
6.7 Rating privacy, user privacy and average personalized privacy
Interpreting the privacy guarantees can be subtle. A privacy loss as large as the one in Figure 7 may seem completely meaningless by Definition 1, and the corresponding results in McSherry and Mironov [18] may appear much better.
We first address the comparison to McSherry and Mironov [18]. It is important to point out that our privacy loss is stated in terms of user-level privacy, while the results in McSherry and Mironov [18] are stated in terms of rating-level privacy, which offers exponentially weaker protection. A user-level guarantee translates into a much stronger rating-level guarantee, and in our case, our results suggest that we lose almost no accuracy at all while preserving rating-level differential privacy. This matches (and slightly improves upon) the carefully engineered system of McSherry and Mironov [18].
On the other hand, we note that the plain privacy loss can be a very deceiving measure of the practical level of protection. Definition 1 protects the privacy of an arbitrary user, who could be a malicious spammer rating every movie in a manner completely opposite to what the learned model would predict. This is a truly paranoid requirement, and arguably not the right one, since we probably should not protect such malicious users to begin with. For an average user, the personalized privacy guarantee (Definition 2) can be much stronger, as the posterior distribution concentrates around models that predict reasonably well for such users. As a result, the log-likelihood associated with these users is bounded by a much smaller number with high probability. In the example shown in Figure 7, a typical user's personal privacy loss is far smaller, which helps reduce the effective privacy loss to a meaningful range.
7 Conclusion
In this paper we described an algorithm for efficient collaborative filtering that is compatible with differential privacy. In particular, we showed that it is possible to accomplish all three goals, accuracy, speed and privacy, without any significant sacrifice on any of them.
Moreover, we introduced the notion of personalized differential privacy. That is, we defined (and proved guarantees for) estimates that respect different degrees of privacy, as required by individual users. We believe that this notion is highly relevant in today's information economy, where the expectation of privacy may be tempered by, e.g., the cost of the service, the quality of the hardware (cheap netbooks deployed with Windows 8.1 with Bing), and the extent to which we want to incorporate the opinions of users.
Our implementation takes advantage of the caching properties of modern microprocessors. By careful latency hiding we are able to obtain near-peak performance. In particular, our implementation is approximately 3 times as fast as GraphChi, the next-fastest recommender system. In sum, this is a strong endorsement of Stochastic Gradient Langevin Dynamics for obtaining differentially private estimates in recommender systems while still preserving good utility.
Acknowledgments: Parts of this work were supported by a grant of Adobe Research. Z. Liu was supported by Creative Program of Ministry of Education (IRT13035); Foundation for Innovative Research Groups of NNSF of China (61221063); NSF of China (91118005, 91218301); Pillar Program of NST (2012BAH16F02). Y.X. Wang was supported by NSF Award BCS0941518 to CMU Statistics and Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office.
Proof of Theorem 1.
The DP claim follows by choosing a suitable utility function and applying the exponential mechanism [19], which preserves DP by outputting a candidate with probability proportional to the exponentiated utility, where the sensitivity of the utility function is defined as its maximum change over neighboring datasets.
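For reference, the exponential mechanism [19] in its standard form outputs a candidate with probability proportional to the exponentiated utility (generic notation, not necessarily the paper's symbols):

```latex
\Pr[\mathcal{M}(D) = \theta] \;\propto\; \exp\!\left(\frac{\epsilon\, u(\theta, D)}{2\,\Delta u}\right),
\qquad
\Delta u \;=\; \max_{\theta}\; \max_{D \sim D'} \bigl| u(\theta, D) - u(\theta, D') \bigr|,
```

where $D \sim D'$ ranges over neighboring datasets differing in one user.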
All we need to do is to work out the sensitivity here. By the stated constraints on the ratings and the factors, the per-rating loss is uniformly bounded. Since one user contributes only one row to the data, the trimming/reweighting procedure ensures that, for any parameter value and any user, the sensitivity of the utility obeys the bound
as specified in the algorithm. The DP claim is simple (given in Proposition 3 of [33]) and we omit it here.
Lastly, we note that the “retry if fail” procedure always samples from the correct distribution conditioned on satisfying our boundedness constraint, and it does not affect the relative probability ratio of any measurable event in the support of this conditional distribution. ∎
Proof of Theorem 2.
For generality, we collect all parameters into a single vector and assume that all regularizers are captured in the prior, so the posterior is proportional to the prior times the likelihood. For any measurable set of outputs, if we add (removing admits the same proof) a particular user whose log-likelihood is uniformly bounded, the probability ratio can be factorized into
It follows that
As a result, the whole expression is bounded by the claimed quantity.
In Algorithm 1, we are sampling from a distribution proportional to the rescaled posterior. This is equivalent to taking the above posterior with the log-likelihood of each user bounded accordingly; therefore the algorithm obeys personalized differential privacy for that user. Taking any customized subset of users with privacy levels adjusted accordingly, we obtain the expression as claimed. ∎
References
 Ahmed et al. [2012] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. Smola. Scalable Inference in Latent Variable Models. In WSDM, 2012.
 Ahn et al. [2012] S. Ahn, A. Korattikara, and M. Welling. Bayesian posterior sampling via stochastic gradient fisher scoring. In Proceedings of the 29th International Conference on Machine Learning (ICML12), pages 1591–1598, 2012.
 Ahn et al. [2015] S. Ahn, A. Korattikara, N. Liu, S. Rajan, and M. Welling. Large scale distributed Bayesian matrix factorization using stochastic gradient MCMC. In KDD, 2015.
 Chen et al. [2014] T. Chen, E. B. Fox, and C. Guestrin. Stochastic Gradient Hamiltonian Monte Carlo. In ICML, 2014.
 Dimitrakakis et al. [2014] C. Dimitrakakis, B. Nelson, A. Mitrokotsa, and B. I. Rubinstein. Robust and private bayesian inference. In Algorithmic Learning Theory, pages 291–305. Springer, 2014.
 Ding et al. [2014] N. Ding, C. Chen, R. D. Skeel, and R. Babbush. Bayesian Sampling Using Stochastic Gradient Thermostats. In NIPS, pages 1–14, 2014.
 Dwork [2006] C. Dwork. Differential privacy. In Automata, Languages and Programming, pages 1–12. Springer, 2006.
 Dwork and Roth [2013] C. Dwork and A. Roth. The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, 9(34):211–407, 2013.
 Dwork et al. [2006] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of cryptography, pages 265–284. Springer, 2006.
 Hartstein et al. [2008] A. Hartstein, V. Srinivasan, T. Puzak, and P. Emma. On the nature of cache miss behavior: Is it √2? The Journal of Instruction-Level Parallelism, 10:1–22, 2008.
 Keshavan et al. [2009] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. In Advances in Neural Information Processing Systems, pages 952–960, 2009.
 Koren [2009] Y. Koren. Collaborative Filtering with Temporal Dynamics. In KDD, number 4, 2009.
 Koren et al. [2009] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. IEEE Computer Society, pages 42–49, 2009.
 Kyrola et al. [2012] A. Kyrola, G. Blelloch, and C. Guestrin. GraphChi: Large-Scale Graph Computation on Just a PC. In OSDI, 2012.
 [15] N. P. Laptev. Analysis of cache architectures. Department of Computer Science–University of California Santa Barbara.
 Low et al. [2014] Y. Low, J. E. Gonzalez, A. Kyrola, D. Bickson, C. E. Guestrin, and J. Hellerstein. Graphlab: A new framework for parallel machine learning. arXiv preprint arXiv:1408.2041, 2014.

 Marsaglia et al. [2004] G. Marsaglia, W. W. Tsang, and J. Wang. Fast Generation of Discrete Random Variables. Journal of Statistical Software, 11, 2004.
 McSherry and Mironov [2009] F. McSherry and I. Mironov. Differentially Private Recommender Systems: Building Privacy into the Netflix Prize Contenders. In KDD, 2009. ISBN 978-1-60558-495-9.
 McSherry and Talwar [2007] F. McSherry and K. Talwar. Mechanism design via differential privacy. In Foundations of Computer Science, 2007. FOCS’07. 48th Annual IEEE Symposium on, pages 94–103. IEEE, 2007.
 Meka et al. [2009] R. Meka, P. Jain, and I. S. Dhillon. Matrix completion from powerlaw distributed samples. In Advances in neural information processing systems, pages 1258–1266, 2009.
 Mir [2013] D. J. Mir. Differential privacy: an exploration of the privacyutility landscape. PhD thesis, Rutgers UniversityGraduate SchoolNew Brunswick, 2013.
 Mobasher et al. [2007] B. Mobasher, R. Burke, R. Bhaumik, and C. Williams. Toward trustworthy recommender systems: An analysis of attack models and algorithm robustness. ACM Transactions on Internet Technology (TOIT), 7(4):23, 2007.
 Narayanan and Shmatikov [2008] A. Narayanan and V. Shmatikov. Robust deanonymization of large sparse datasets. In Security and Privacy, 2008. SP 2008. IEEE Symposium on, pages 111–125. IEEE, 2008.

 Neal [2011] R. M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2, 2011.
 Niu et al. [2011] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. In NIPS, 2011.
 Salakhutdinov [2008] R. Salakhutdinov. Bayesian Probabilistic Matrix Factorization using Markov Chain Monte Carlo. In ICML, 2008.
 Salakhutdinov et al. [2007] R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted Boltzmann Machines for Collaborative Filtering. In ICML, 2007.
 Sato and Nakagawa [2014] I. Sato and H. Nakagawa. Approximation analysis of stochastic gradient langevin dynamics by using fokkerplanck equation and ito process. In Proceedings of the 31st International Conference on Machine Learning (ICML14), pages 982–990, 2014.
 Srebro and Salakhutdinov [2010] N. Srebro and R. R. Salakhutdinov. Collaborative filtering in a nonuniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems, pages 2056–2064, 2010.
 Teh et al. [2014] Y. W. Teh, A. Thiéry, and S. Vollmer. Consistency and fluctuations for stochastic gradient langevin dynamics. arXiv preprint arXiv:1409.0578, 2014.
 Teh et al. [2015] Y. W. Teh, S. J. Vollmer, and K. C. Zygalakis. (Non) asymptotic properties of Stochastic Gradient Langevin Dynamics. arXiv preprint arXiv:1501.00438, 2015.
 Wang and Xu [2012] Y.X. Wang and H. Xu. Stability of matrix factorization for collaborative filtering. In Proceedings of the 29th International Conference on Machine Learning (ICML12), pages 417–424, 2012.
 Wang et al. [2015] Y.-X. Wang, S. E. Fienberg, and A. Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In ICML, 2015.
 Welling and Teh [2011] M. Welling and Y. W. Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In ICML, 2011.
 Xin and Jaakkola [2014] Y. Xin and T. Jaakkola. Controlling privacy in recommender systems. In NIPS, 2014.
 Zhang et al. [2012] A. Zhang, N. Fawaz, S. Ioannidis, and A. Montanari. Guess who rated this movie: Identifying users through subspace clustering. arXiv preprint arXiv:1208.1544, 2012.
 Zhao and Canny [2014] H. Zhao and J. F. Canny. High Performance Machine Learning through Codesign and Rooflining. PhD thesis, EECS Department, University of California, Berkeley, Sep 2014.