1 Introduction
In response to the rapidly increasing interest in big data analytics, various system frameworks have been proposed to scale out ML algorithms to distributed systems, such as Pregel [8], Piccolo [9], GraphLab [7] and YahooLDA [1]. Amongst them, the parameter server [10, 5, 3, 9] is a widely used system architecture and programming abstraction that can support a broad range of ML algorithms. A parameter server can be conceptualized as a (usually distributed) key-value store that holds model parameters and supports concurrent read and write accesses from distributed clients [5]. In order to minimize the network overhead of remote accesses, shared parameters are (partially) replicated on client nodes and accesses are serviced from local replicas. A proper level of consistency among replicas must therefore be guaranteed. A desirable consistency model for a parameter server must meet two requirements: 1) correctness of the distributed algorithm can be theoretically proven; 2) the computing power of the system is fully utilized. The consistency model effectively decouples the system implementation from ML algorithms and can be used to reason about the quality of the solution (such as the rate of variance reduction).
Maintaining consistency among replicas is a classic problem in database research. Various consistency models have been used to provide different levels of guarantees for different applications [11]. However, directly applying them usually fails to meet the requirements of a parameter server. It is difficult or impossible to prove algorithm correctness under a naive eventual consistency model, since it does not bound the delay before one client sees other clients' updates; a client may keep making progress on arbitrarily stale values, which can lead to divergence. Stronger consistency models such as sequential consistency and linearizability require serializing updates, which significantly restricts the parallelism of the application.
Consistency models employed in modern distributed ML systems tend to fall into two extremes: either sequential consistency or no consistency guarantee at all. For example, Distributed GraphLab serializes read and write accesses to vertices and edges by scheduling vertex programs according to a carefully colored graph or by locking [7]. Although such a model guarantees the correctness of the algorithm, it may underutilize the distributed system's computing power. At the other extreme, YahooLDA [1] employs a best-effort consistency model where the system makes a best effort to deliver updates but does not make any guarantee. Although the system empirically achieves good performance for LDA, there is no theoretical guarantee that the distributed implementation of the algorithm is correct. It is also unclear whether such a system with loose consistency can generalize to a broad range of ML algorithms. In fact, the system can potentially fail if stragglers are present or the network bandwidth is saturated.
Recently, [5] presented the Stale Synchronous Parallel (SSP) model, which is a variant of the bounded staleness consistency model, a form of eventual consistency. Similar to Bulk Synchronous Parallel (BSP), an execution of SSP consists of multiple iterations, and each iteration is composed of a computation phase and a synchronization phase. Updates are sent out only during the synchronization phase. However, in SSP, a client is allowed to run ahead of other clients by at most s iterations, where s is a threshold set by the application. In SSP, accesses to shared parameters are usually serviced by local replicas, and network accesses only occur in case the local replica is more than s iterations stale. SSP delivers high throughput as it reduces network communication cost. [5] shows SSP is theoretically sound by proving the convergence of stochastic gradient descent.
The asynchronous parallel model improves system performance over BSP because 1) it uses CPU and network bandwidth in parallel (e.g. sending out updates whenever network bandwidth is available); 2) it does not enforce a synchronization barrier. Also, since the system makes a best effort to send out updates (instead of waiting for a synchronization barrier), clients are more likely to compute with fresh data. Thus it may also bring algorithmic benefits. However, it is more difficult to maintain consistency in an asynchronous system, as communication may happen at any time.
We have found that many ML algorithms are sufficiently robust to a bounded amount of inconsistency and thus admit consistency models weaker than serializability. By relaxing the consistency guarantees properly, the system may gain significant throughput improvements. In this paper, we present several high-throughput consistency models that take advantage of the robustness of ML algorithms. As shown in [5], even though relaxed consistency improves system throughput, it may result in reduced algorithmic progress per iteration (e.g., a slower convergence rate). All our proposed consistency models allow application developers to tune the strictness of the consistency model to find the sweet spot. We present the following consistency models:
Clock-bounded Asynchronous Parallel (CAP): We apply the concept of "clock-bounded" consistency from SSP to an asynchronous parameter server. Unlike SSP, where updates are sent out only during the synchronization phase, CAP propagates updates whenever network bandwidth is available. Similar to SSP, CAP guarantees bounded staleness: a client must see all updates older than a certain timestamp.
Value-bounded Asynchronous Parallel (VAP): This consistency model guarantees a bound on the absolute deviation between any two parameter replicas.
Clock-Value-bounded Asynchronous Parallel (CVAP): This model combines CAP and VAP to provide stronger consistency guarantees. With such a guarantee, the solution's quality (e.g. asymptotic variance) can be assessed.
2 Consistency Models
In this section, we present the formal definitions of CAP, VAP and CVAP. Consider a collection of P workers which share access to a set of parameters. A worker writes to a parameter by applying an update through an associative and commutative operation. The asynchronous parameter server propagates the update at some point in time; the worker usually proceeds without waiting, but may block occasionally to maintain consistency. Thus the parameter server may accumulate a set of updates generated by a worker which are not yet synchronized across all workers. All our consistency models ensure read-my-writes consistency. That is, a worker sees all its previous writes whether or not the updates are synchronized with other workers. Intuitively, read-my-writes consistency is desirable as it allows a worker to proceed with fresher parameter values. Our consistency models also ensure FIFO consistency [6]: updates from a single worker are seen by all other workers in the order in which they are issued. Intuitively, this guarantee ensures that updates are applied in the order they were issued and prevents a worker from being biased by a particular subset of updates from another worker.
2.1 Clock-bounded Asynchronous Parallel (CAP)
Informally, Clock-bounded Asynchronous Parallel (CAP) ensures that all workers are making sufficient forward progress; otherwise, faster workers are blocked to wait for the slow ones. The progress of a worker is represented by its "clock", an integer which starts from 0 and is incremented at regular intervals. Updates generated between clock c and clock c+1 are timestamped with c. The consistency model guarantees that a worker at clock c sees all other clients' updates with timestamps in the range [0, c - s - 1], where s is a user-defined staleness threshold.
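To make the blocking condition concrete, here is a minimal sketch of a CAP-style wait, assuming a hypothetical ClockTracker that records every worker's clock; the names and structure are illustrative, not Petuum PS's actual code:

```cpp
#include <algorithm>
#include <condition_variable>
#include <mutex>
#include <vector>

// Hypothetical tracker of every worker's clock (illustration only).
class ClockTracker {
 public:
  explicit ClockTracker(int num_workers) : clocks_(num_workers, 0) {}

  // Called by worker `id` at the end of each of its iterations.
  void Tick(int id) {
    std::lock_guard<std::mutex> lk(mu_);
    ++clocks_[id];
    cv_.notify_all();
  }

  // CAP: a worker at clock `my_clock` may only proceed if it is no more than
  // `staleness` clocks ahead of the slowest worker.
  void WaitUntilWithinStaleness(int my_clock, int staleness) {
    std::unique_lock<std::mutex> lk(mu_);
    cv_.wait(lk, [&] {
      int min_clock = *std::min_element(clocks_.begin(), clocks_.end());
      return my_clock - min_clock <= staleness;
    });
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::vector<int> clocks_;
};
```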
We omit the proof of correctness for CAP as the analysis in [5] applies to CAP as well.
2.2 Value-bounded Asynchronous Parallel (VAP)
Since the asynchronous parameter server does not block workers when propagating updates, a worker may accumulate a set of updates that are only visible to itself. We refer to this set of updates as unsynchronized local updates. As the update operation is associative and commutative, updates may be aggregated by summing them up. The Value-bounded Asynchronous Parallel (VAP) model guarantees that, for any worker, the accumulated sum of unsynchronized local updates to any parameter is less than v, where v is a user-defined threshold. When a worker attempts to apply an update that would make the accumulated sum exceed the threshold v, the worker is blocked until the system has made a sufficient number of its updates visible to all workers. The VAP model is illustrated by Figure 1.
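A minimal sketch of the VAP bookkeeping described above, assuming a hypothetical per-worker VapTracker; the real implementation in Petuum PS may organize this differently:

```cpp
#include <cmath>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <unordered_map>

// Hypothetical per-worker VAP bookkeeping (illustration only).
class VapTracker {
 public:
  explicit VapTracker(double threshold) : v_(threshold) {}

  // Called before the worker applies `delta` to parameter `key`.
  // Blocks while the update would push the unsynchronized sum past v.
  void OnLocalUpdate(int64_t key, double delta) {
    std::unique_lock<std::mutex> lk(mu_);
    cv_.wait(lk, [&] { return std::abs(unsynced_[key] + delta) <= v_; });
    unsynced_[key] += delta;
  }

  // Called once the system has made `delta` on `key` visible to all workers.
  void OnSynchronized(int64_t key, double delta) {
    std::lock_guard<std::mutex> lk(mu_);
    unsynced_[key] -= delta;
    cv_.notify_all();
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  double v_;  // user-defined VAP threshold
  std::unordered_map<int64_t, double> unsynced_;  // per-parameter unsynchronized sum
};
```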
We should note that the VAP model described above still allows two workers to see two very different sets of updates. For any pair of workers, say A and B, even though VAP restricts how they see updates generated within this pair, it makes no guarantees about updates that are generated by other peer workers. For example, it can happen that worker A has seen one update from each other worker, but B has not seen any of them. Thus, VAP only provides a very loose bound on how differently two workers may read a parameter: for any two workers A and B, let x_A and x_B be the sums over all updates seen by A and B respectively. The absolute difference |x_A - x_B| is then only bounded by a quantity on the order of P(v + u_max), where u_max is an upper bound on the magnitude of any update, v is the aforementioned user-defined threshold, and P is the number of workers.
We may provide a stronger consistency guarantee by further restricting how workers see other workers' updates. We refer to an update as a half-synchronized update if it has been seen by at least one other worker (that did not generate the update), but has not yet been seen by all other workers. In addition to the guarantees provided by weak VAP, the strong VAP model guarantees that, for any parameter, the total magnitude of all half-synchronized updates is also bounded in terms of v and u_max, as defined before. Thus, strong VAP guarantees that for any two workers A and B, |x_A - x_B| is upper bounded by a constant multiple of (v + u_max). Note that this bound is independent of P, the number of workers.
2.3 Clock-Value-bounded Asynchronous Parallel (CVAP)
CAP may be combined with VAP to provide stronger consistency guarantees. The idea is that CVAP ensures all workers make enough progress while also bounding the absolute difference between replicas; CVAP provides the consistency guarantees of both CAP and VAP. As VAP has a strong and a weak version, there are correspondingly two versions of CVAP. As we show in Section 3, CVAP allows application developers to guarantee a certain level of solution quality.
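A minimal sketch of how the two checks could be composed under CVAP, reusing the hypothetical ClockTracker and VapTracker from the sketches above (illustrative only):

```cpp
// Hypothetical CVAP gate: a worker may apply a local update only if it
// satisfies both the clock bound (CAP) and the value bound (VAP).
class CvapGate {
 public:
  CvapGate(ClockTracker* clocks, VapTracker* values, int staleness)
      : clocks_(clocks), values_(values), staleness_(staleness) {}

  void BeforeUpdate(int my_clock, int64_t key, double delta) {
    // Block if this worker is more than `staleness_` clocks ahead (CAP).
    clocks_->WaitUntilWithinStaleness(my_clock, staleness_);
    // Block if the unsynchronized sum on `key` would exceed v (VAP).
    values_->OnLocalUpdate(key, delta);
  }

 private:
  ClockTracker* clocks_;
  VapTracker* values_;
  int staleness_;
};
```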
3 Theoretical Analysis
We choose the Stochastic Gradient Descent (SGD) algorithm as our example application and prove its convergence under VAP. Our proof utilizes the techniques developed in [5]. As in [5], the VAP model supports operations $x \leftarrow x \oplus (z \cdot y)$, where $x, y$ are members of a ring with an abelian operator $\oplus$ (such as addition), and a multiplication operator $\cdot$ such that $z \cdot y = y'$, where $y'$ is also in the ring. We shall informally refer to $x$ as the "system state", $u = z \cdot y$ as an "update", and to the operation $x \leftarrow x \oplus u$ as "writing an update".
As defined in Section 2.2, the updates $u$ are accumulated and are propagated to the server when they grow larger than the threshold $v$. We index these propagation times by $t$. We define $u_{p,t}$ as the accumulated update written by worker $p$ at propagation time $t$ through the write operation $x \leftarrow x \oplus u_{p,t}$. The updates are a function of the system state $x$, and under the VAP model, different workers will "see" different, noisy versions of the true state $x$. Let $\tilde{x}_{p,t}$ be the noisy state read by worker $p$ at time $t$, implying $u_{p,t} = G(\tilde{x}_{p,t})$ for some function $G$. Formally, $\tilde{x}_{p,t}$ in VAP takes the following form:
Value-Bounded Staleness:
Fix a staleness $s$. Then, the noisy state $\tilde{x}_{p,t}$ is equal to

$$\tilde{x}_{p,t} \;=\; x_0 \;+\; \sum_{t'=1}^{t-s-1}\sum_{p'=1}^{P} u_{p',t'} \;+\; \sum_{t'=t-s}^{t-1} u_{p,t'} \;+\; \sum_{(p',t') \in S_{p,t}} u_{p',t'}, \qquad (1)$$

where $S_{p,t}$ is some subset of the updates written in the propagation "window" $[t-s,\, t+s-1]$ and does not include updates from worker $p$. In other words, the noisy state consists of three parts:

Guaranteed "pre-propagation" updates from the beginning of execution up to propagation $t-s-1$, for every worker $p'$.

Guaranteed "read-my-writes" set that covers all "in-window" updates made by the querying worker $p$.

Best-effort "in-window" updates from $S_{p,t}$ (not counting updates from worker $p$).
As with [5], VAP also generalizes the Bulk Synchronous Parallel (BSP) model:
BSP Lemma:
Under zero staleness ($s = 0$), VAP reduces to BSP. Proof: with $s = 0$, the noisy state $\tilde{x}_{p,t}$ exactly consists of all updates up to time $t-1$.
We now define a reference sequence of states $x_t$, informally referred to as the "true" sequence:

$$x_t \;=\; x_0 + \sum_{t'=0}^{t} u_{t'}, \qquad \text{where } u_t := u_{(t \bmod P),\, \lfloor t/P \rfloor}.$$

In other words, we sum updates by first looping over workers ($t \bmod P$), then over time intervals ($\lfloor t/P \rfloor$). Now, let us use VAP to bound the difference between the "true" sequence $x_t$ and the noisy views $\tilde{x}_{p,t}$:
Lemma 1:
Assume the strong VAP bound holds at every propagation time $t$, and let $e_t := \tilde{x}_t - x_t$, so that

$$\tilde{x}_t = x_t + e_t, \qquad (2)$$

where $e_t$ is the difference (i.e. error) between $\tilde{x}_t$ and $x_t$. We claim that $\|e_t\|$ is at most $\sqrt{d}$ times the per-coordinate strong VAP bound, where $\|\cdot\|$ is the $L_2$ norm and $d$ is the dimension of $x$. Proof: As argued earlier, strong VAP bounds each coordinate of $e_t$, i.e. it bounds $\|e_t\|_\infty$. The result immediately follows since $\|a\|_2 \le \sqrt{d}\,\|a\|_\infty$ for all $a$.
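For completeness, the norm-conversion step used in the proof is the standard inequality, valid for any vector and not specific to VAP:

$$\|a\|_2 \;=\; \sqrt{\textstyle\sum_{i=1}^{d} a_i^2} \;\le\; \sqrt{d \cdot \max_i a_i^2} \;=\; \sqrt{d}\,\|a\|_\infty \quad \text{for all } a \in \mathbb{R}^d.$$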
Theorem 1 (SGD under VAP):
Suppose we want to find the minimizer $x^*$ of a convex function $f(x) = \sum_{t=1}^{T} f_t(x)$, via gradient descent on one component $\nabla f_t$ at a time. We assume the components $f_t$ are also convex. Let the updates be $u_t := -\eta_t \nabla f_t(\tilde{x}_t)$, with decreasing step size $\eta_t$, and let the VAP threshold $v_t$ also decrease with $t$. Then, under suitable conditions (the $f_t$ are Lipschitz and the distance between any two points in the domain is bounded), the regret $R[X] := \sum_{t=1}^{T} f_t(\tilde{x}_t) - f(x^*)$ grows sublinearly in $T$.
Dividing the regret by $T$, we see that $R[X]/T \to 0$, i.e. VAP converges, when all other quantities are fixed.
3.1 Proof of Theorem 1
We follow the proof of [5]. Define $D(x\|x') := \frac{1}{2}\|x - x'\|^2$, where $\|\cdot\|$ is the $L_2$ norm.
Proof:
We have

$$R[X] := \sum_{t=1}^{T} f_t(\tilde{x}_t) - f_t(x^*) \;\le\; \sum_{t=1}^{T} \langle \nabla f_t(\tilde{x}_t),\, \tilde{x}_t - x^* \rangle,$$

where we have defined $\tilde{g}_t := \nabla f_t(\tilde{x}_t)$ and used the convexity of each $f_t$. The high-level idea is to show that $R[X]$ grows sublinearly in $T$, which implies $R[X]/T \to 0$ and thus convergence. First, we shall say something about each term $\langle \tilde{g}_t,\, \tilde{x}_t - x^* \rangle$.
Lemma 2:
If , then for all ,
Proof:
Thus,
This completes the proof of Lemma 2.
Back to Theorem 1:
Returning to the proof of Theorem 1, we use Lemma 2 to expand the regret $R[X]$:
We now upper-bound each of the terms:
and
and
Hence,
This completes the proof of Theorem 1.
4 Parameter Server Design and Implementation
We implemented the proposed consistency models in a parameter server, called Petuum PS. For the purpose of comparison, Petuum PS also implements SSP. Petuum PS is implemented in C++, and ZeroMQ is used for network communication.
Petuum PS contains a collection of server processes, which hold the shared parameters in a distributed fashion. An application process accesses the shared parameters via a client library. The client library caches the parameters in order to minimize network communication overhead. An application process may contain multiple threads. The client library allocates a thread cache for each application thread to reduce contention among them. A thread corresponds to a worker in our consistency model description. The parameter server architecture is visualized in Fig 2.
4.1 System Abstraction
Petuum PS organizes shared parameters as tables. Thus a parameter stored in Petuum PS is identified by a triple of table id, row id and column id. Petuum PS supports both dense and sparse rows, corresponding to dense and sparse column indices. A table is distributed across a set of server processes via hash partitioning, and a row is the unit of data distribution and transmission. The data stored in one table must be of the same type, and Petuum PS can support an unlimited number of tables. It is worth mentioning that our implementation allows different tables to use different consistency models.
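As an illustration of row-based hash partitioning, the following sketch shows how a (table id, row id) pair might be mapped to a server process; the identifiers are assumptions, not Petuum PS's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Hypothetical addressing scheme: a parameter is named by (table, row, column),
// and the row is the unit of placement on servers.
struct RowId {
  int32_t table_id;
  int64_t row_id;
};

// Hash-partition rows across `num_servers` server processes.
inline int ServerFor(const RowId& r, int num_servers) {
  std::size_t h = std::hash<int64_t>{}(r.row_id) ^
                  (std::hash<int32_t>{}(r.table_id) << 1);
  return static_cast<int>(h % num_servers);
}
```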
Petuum PS supports a set of straightforward APIs for data access. The main functions are listed below (a usage sketch follows the list):

Get(table_id, row_id, column_id): Retrieve an element from a table.

Inc(table_id, row_id, column_id, delta): Update an element by delta.

Clock(): Increment the worker thread’s clock by 1.
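The following hypothetical client-side sketch shows how these three calls might be used in a worker's iteration loop; the exact signatures in Petuum PS may differ, and the stub bodies are for illustration only:

```cpp
#include <cstdint>

// Hypothetical stand-ins for the three API calls described above.
static double Get(int table_id, int64_t row_id, int col_id) { return 0.0; }
static void   Inc(int table_id, int64_t row_id, int col_id, double delta) {}
static void   Clock() {}

// One worker thread's iteration loop using the API.
static void WorkerLoop(int table_id, int num_iterations) {
  const double step_size = 0.01;
  for (int iter = 0; iter < num_iterations; ++iter) {
    // Read a parameter (normally serviced from the local cache).
    double w = Get(table_id, /*row_id=*/0, /*col_id=*/0);

    // Compute a gradient-style delta locally (placeholder computation).
    double delta = -step_size * w;

    // Apply the update; the client library propagates it asynchronously.
    Inc(table_id, /*row_id=*/0, /*col_id=*/0, delta);

    // Mark the end of this iteration so clock-based models can track progress.
    Clock();
  }
}
```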
4.2 System Components
Petuum PS's client library employs a two-level cache hierarchy to hide network access latency and minimize contention among application threads: all application threads within a process share a process cache, which caches rows fetched from the server. Each thread also has its own thread cache, and data accesses are mostly serviced by the thread cache as long as memory suffices. In order to support asynchronous computation, the thread cache employs a write-back strategy.
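A minimal sketch of the two-level lookup and write-back behavior described above; the names (ThreadCache, FetchFromProcessCacheOrServer) are assumptions, not Petuum PS's actual interfaces:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical thread-level cache: thread cache -> process cache -> server.
class ThreadCache {
 public:
  double Get(int64_t key) {
    auto it = cache_.find(key);
    if (it != cache_.end()) return it->second;           // hit in thread cache
    double value = FetchFromProcessCacheOrServer(key);   // fall back one level
    cache_[key] = value;
    return value;
  }

  // Write-back: updates are applied locally and flushed later in a batch,
  // so the worker thread never blocks on the network for an Inc().
  void Inc(int64_t key, double delta) {
    auto it = cache_.find(key);
    if (it != cache_.end()) it->second += delta;  // keep the local view fresh
    pending_[key] += delta;                       // buffer delta for later flush
  }

 private:
  // Stub for illustration: a real implementation would consult the shared
  // process cache and fall back to a network fetch from the server.
  double FetchFromProcessCacheOrServer(int64_t /*key*/) { return 0.0; }

  std::unordered_map<int64_t, double> cache_;    // cached parameter values
  std::unordered_map<int64_t, double> pending_;  // write-back buffer of deltas
};
```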
In order to support clock-based consistency models, the system needs to keep track of the clock of each worker. This is achieved using vector clocks. Each client library maintains a vector clock in which each entry represents the clock of a thread. The minimum clock in the vector represents the progress of the process. The server treats a process as a single entity, and its vector clock keeps track of the progress of all processes.
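A minimal vector-clock sketch matching the description above (illustrative only):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical vector clock: one entry per thread on the client side, or one
// entry per client process on the server side.
class VectorClock {
 public:
  explicit VectorClock(int num_entries) : clocks_(num_entries, 0) {}

  void Tick(int entry) { ++clocks_[entry]; }

  // The progress of the whole process (or of all processes, on the server)
  // is the minimum clock across all entries.
  int64_t MinClock() const {
    return *std::min_element(clocks_.begin(), clocks_.end());
  }

 private:
  std::vector<int64_t> clocks_;
};
```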
Asynchronous systems tend to congest the network with a large volume of messages. Our client and server therefore batch messages to achieve high throughput. Messages are sent out based on their priorities, which may be application-dependent. By default we prioritize updates with larger magnitudes, as they are more likely to contribute to convergence.
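A sketch of one possible way to order outgoing updates by magnitude, as described above; the data structures are assumptions rather than Petuum PS's actual implementation:

```cpp
#include <cmath>
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical outgoing-update queue ordered by update magnitude, so that
// larger (likely more convergence-relevant) updates are sent first.
struct PendingUpdate {
  int64_t key;
  double delta;
};

struct ByMagnitude {
  bool operator()(const PendingUpdate& a, const PendingUpdate& b) const {
    return std::fabs(a.delta) < std::fabs(b.delta);  // max-heap on |delta|
  }
};

using UpdateQueue =
    std::priority_queue<PendingUpdate, std::vector<PendingUpdate>, ByMagnitude>;
```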
4.3 Implementing Consistency Models
We implement SSP, CAP, VAP and CVAP in a unified, modular fashion. We observe that these consistency models can be implemented by performing different operations when application threads access parameters. In other words, the exact semantics of the APIs depend on the consistency model being used. Each consistency model is expressed as a Consistency Policy data structure. Each table is associated with a Consistency Controller, which checks the Consistency Policy and services user accesses accordingly. The consistency control logic is visualized in Fig 3.
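A minimal sketch of the policy/controller split described above; the class and method names are assumptions, not the actual Petuum PS interfaces:

```cpp
#include <cstdint>
#include <memory>

// Hypothetical policy interface: each consistency model (SSP, CAP, VAP, CVAP)
// decides whether a read can be served locally and whether a write must block.
class ConsistencyPolicy {
 public:
  virtual ~ConsistencyPolicy() = default;
  virtual bool CanReadLocally(int64_t row_id, int worker_clock) const = 0;
  virtual void OnWrite(int64_t row_id, double delta, int worker_clock) = 0;
};

// Hypothetical per-table controller that consults the policy on every access.
class ConsistencyController {
 public:
  explicit ConsistencyController(std::unique_ptr<ConsistencyPolicy> policy)
      : policy_(std::move(policy)) {}

  double Get(int64_t row_id, int col_id, int worker_clock) {
    if (!policy_->CanReadLocally(row_id, worker_clock)) {
      FetchRowFromServer(row_id);  // blocking refresh of a stale row
    }
    return ReadFromCache(row_id, col_id);
  }

  void Inc(int64_t row_id, int col_id, double delta, int worker_clock) {
    policy_->OnWrite(row_id, delta, worker_clock);  // may block (e.g. VAP)
    ApplyToCache(row_id, col_id, delta);
  }

 private:
  // Stubs for illustration only.
  void FetchRowFromServer(int64_t) {}
  double ReadFromCache(int64_t, int) { return 0.0; }
  void ApplyToCache(int64_t, int, double) {}
  std::unique_ptr<ConsistencyPolicy> policy_;
};
```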
The different semantics of the APIs involve network communication and blocking waits for responses. Petuum PS uses three types of network communication:

Client Push: Client pushes one or a batched set of updates to the server.

Client Pull: Client pulls a row from the server.

Server Push: Server pushes one or a batched set of updates to relevant clients.
Coupled with a proper cache coherence mechanism, these communication primitives are sufficient for implementing our consistency models.
5 Evaluation
We evaluate our prototype parameter server via topic modeling. Latent Dirichlet Allocation (LDA) [2] is a popular unsupervised model that discovers a latent topic vector for each document. We implemented LDA on our parameter server with the weak VAP model and conducted experiments on an 8-node cluster. Each node is equipped with 64 cores and 128GB of main memory, and the nodes are connected with a 40 Gbps Ethernet network. We restrict our experiment to use at most 8 cores and 32GB of memory per machine to emulate a cluster of commodity machines.
We used a relatively small dataset, 20News, and evaluated in particular the strong scalability of the system. Statistics of the 20News dataset are shown in Table 1.
Table 1: Statistics of the 20News dataset.
# of docs: 11269
# of words: 53485
# of tokens: 1318299
We fixed the number of topics to 2000 while varying the number of workers. We assign each worker a core exclusively. We show the results in Figure 5, where the number of cores used ranges from 8 to 32. The curve shows the speedup of Petuum PS versus ideal linear scalability. Even though our experiments were conducted at a relatively small scale, the results show that the system has great potential to scale up.
Acknowledgments
We thank PRObE [4] and the CMU PDL Consortium for providing the testbed and technical support for our experiments.
References
 [1] Amr Ahmed, Mohamed Aly, Joseph Gonzalez, Shravan Narayanamurthy, and Alexander J. Smola. Scalable inference in latent variable models. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, WSDM '12, pages 123–132, New York, NY, USA, 2012. ACM.
 [2] David M. Blei, Andrew Ng, and Michael Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
 [3] James Cipar, Qirong Ho, Jin Kyu Kim, Seunghak Lee, Gregory R. Ganger, Garth Gibson, Kimberly Keeton, and Eric Xing. Solving the straggler problem with bounded staleness. In Proc. of the 14th Usenix Workshop on Hot Topics in Operating Systems, HotOS ’13. Usenix, 2013.
 [4] Garth Gibson, Gary Grider, Andree Jacobson, and Wyatt Lloyd. PRObE: A thousand-node experimental cluster for computer systems research. Volume 38, June 2013.
 [5] Qirong Ho, James Cipar, Henggang Cui, Seunghak Lee, Jin Kyu Kim, Phillip B. Gibbons, Garth A. Gibson, Greg Ganger, and Eric Xing. More effective distributed ML via a stale synchronous parallel parameter server. In Advances in Neural Information Processing Systems 26, pages 1223–1231, 2013.
 [6] R. J. Lipton and J. S. Sandberg. PRAM: A scalable shared memory. Technical Report CS-TR-180-88, Princeton University, September 1988.

 [7] Yucheng Low, Danny Bickson, Joseph Gonzalez, Carlos Guestrin, Aapo Kyrola, and Joseph M. Hellerstein. Distributed GraphLab: A framework for machine learning and data mining in the cloud. Proc. VLDB Endow., 5(8):716–727, April 2012.
 [8] Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, and Grzegorz Czajkowski. Pregel: A system for large-scale graph processing. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, SIGMOD '10, pages 135–146, New York, NY, USA, 2010. ACM.
 [9] Russell Power and Jinyang Li. Piccolo: Building fast, distributed programs with partitioned tables. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation, OSDI '10, 2010.
 [10] Alexander Smola and Shravan Narayanamurthy. An architecture for parallel topic models. Proc. VLDB Endow., 3(12):703–710, September 2010.
 [11] Doug Terry. Replicated data consistency explained through baseball. Commun. ACM, 56(12):82–89, December 2013.