Petuum: A New Platform for Distributed Machine Learning on Big Data

12/30/2013 · by Eric P. Xing, et al.

What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, allowing ML programs to run in much less time and at considerably larger model sizes, even on modestly-sized compute clusters.


1 Introduction

Machine learning (ML) is becoming a primary mechanism for extracting information from data. However, the surging volume of Big Data from Internet activities and sensory advancements, and the increasing need for Big Models for ultra-high-dimensional problems, have put tremendous pressure on ML methods to scale beyond a single machine, due to both space and time bottlenecks. For example, the Clueweb 2012 web crawl (http://www.lemurproject.org/clueweb12.php/) contains over 700 million web pages as 27TB of text data, while photo-sharing sites such as Flickr, Instagram and Facebook are anecdotally known to possess tens of billions of images, again taking up TBs of storage. It is highly inefficient, if possible at all, to use such big data sequentially in a batch or stochastic fashion in a typical iterative ML algorithm. On the other hand, state-of-the-art image recognition systems have now embraced large-scale deep learning models with billions of parameters [14]; topic models with very large numbers of topics can cover long-tail semantic word sets for substantially improved online advertising [23, 28]; and very-high-rank matrix factorization yields improved prediction on collaborative filtering problems [32]. Training such big models on a single machine can be prohibitively slow, if possible at all.

Despite the recent rapid development of many new ML models and algorithms aimed at scalable application [6, 25, 11, 33, 1, 2], adoption of these technologies remains generally limited in the wider data mining, NLP, vision, and other application communities for big problems, especially those built on advanced probabilistic or optimization programs. We suggest that, from the scalable-execution point of view, what prevents many state-of-the-art ML models and algorithms from being more widely applied at Big-Learning scales is the difficult migration from an academic implementation, often specialized for a small, well-controlled computing platform such as desktop PCs and small lab clusters, to a big, less predictable platform such as a corporate cluster or the cloud, where correct execution of the original programs requires careful control and mastery of low-level details of the distributed environment and resources, through highly nontrivial distributed programming.

Many platforms have provided partial solutions to bridge this research-to-production gap: while Hadoop [24] is a popular and easy-to-program platform, the simplicity of its MapReduce abstraction makes it difficult to exploit ML properties such as error tolerance (at least, not without considerable engineering effort to bypass MapReduce limitations), and its performance on many ML programs has been surpassed by alternatives [29, 17]. One such alternative is Spark [29], which generalizes MapReduce and scales well on data while offering an accessible programming interface; yet, Spark does not offer fine-grained scheduling of computation and communication, which has been shown to be hugely advantageous, if not outright necessary, for fast and correct execution of advanced ML algorithms [4]. Graph-centric platforms such as GraphLab [17] and Pregel [18] efficiently partition graph-based models with built-in scheduling and consistency mechanisms; but ML programs such as topic modeling and regression either do not admit obvious graph representations, or a graph representation may not be the most efficient choice; moreover, due to limited theoretical work, it is unclear whether asynchronous graph-based consistency models and scheduling will always yield correct execution of such ML programs. Other systems provide low-level programming interfaces [20, 16] that, while powerful and versatile, do not yet offer higher-level, general-purpose building blocks — such as scheduling, model partitioning strategies, and managed communication — that are key to simplifying the adoption of a wide range of ML methods. In summary, existing systems supporting distributed ML each manifest a unique tradeoff among efficiency, correctness, programmability, and generality.

Figure 1: The scale of Big ML efforts in recent literature. A key goal of Petuum is to enable larger ML models to be run on fewer resources, even relative to highly-specialized implementations.

In this paper, we explore the problem of building a distributed machine learning framework with a new angle toward the efficiency, correctness, programmability, and generality tradeoff. We observe that a hallmark of most (if not all) ML programs is that they are defined by an explicit objective function over data (e.g., likelihood, error loss, graph cut), and the goal is to attain optimality of this function in the space defined by the model parameters and other intermediate variables. Moreover, these algorithms all bear a common style, in that they resort to an iterative-convergent procedure (see Eq. 1). It is noteworthy that iterative-convergent computing tasks are vastly different from conventional programmatic computing tasks (such as database queries and keyword extraction), which reach correct solutions only if every deterministic operation is correctly executed and strong consistency is guaranteed on the intermediate program state — thus, operational objectives such as fault tolerance and strong consistency are absolutely necessary. However, an ML program's true goal is fast, efficient convergence to an optimal solution, and we argue that fine-grained fault tolerance and strong consistency are but one vehicle to achieve this goal, and might not even be the most efficient one.

We present a new distributed ML framework, Petuum, built on an ML-centric optimization-theoretic principle, as opposed to the various operational objectives explored earlier. We begin by formalizing ML algorithms as iterative-convergent programs, which encompass a large space of modern ML, such as stochastic gradient descent, MCMC for determining point estimates in latent variable models [9], coordinate descent, variational methods for graphical models [11], and proximal optimization for structured sparsity problems [3], among others. To our knowledge, no existing ML platform has considered such a wide spectrum of ML algorithms, which exhibit diverse representation abstractions, model and data access patterns, and synchronization and scheduling requirements. So what are the shared properties across such a "zoo of ML algorithms"? We believe that the key lies in the recognition of a clear dichotomy between data (which is conditionally independent and persistent throughout the algorithm) and model (which is internally coupled, and is transient before converging to an optimum). This inspires a simple yet statistically-rooted bimodal approach to parallelism: data-parallel and model-parallel distribution and execution of a big ML program over a cluster of machines.

This data-parallel, model-parallel approach keenly exploits the unique statistical nature of ML algorithms, particularly the following three properties: (1) Error tolerance — iterative-convergent algorithms are often robust against limited errors in intermediate calculations; (2) Dynamic structural dependency — during execution, the changing correlation strengths between model parameters are critical to efficient parallelization; (3) Non-uniform convergence — the number of steps required for a parameter to converge can be highly skewed across parameters. The core goal of Petuum is to execute these iterative updates in a manner that quickly converges to an optimum of the ML program's objective function, by exploiting these three statistical properties of ML, which we argue are fundamental to efficient large-scale ML in cluster environments.

This design principle contrasts with those of several existing frameworks discussed earlier. For example, central to the Spark framework [29] is the principle of perfect fault tolerance and recovery, supported by a persistent memory architecture (Resilient Distributed Datasets); whereas central to the GraphLab framework is the principle of local and global consistency, supported by a vertex programming model (the Gather-Apply-Scatter abstraction). While these design principles reflect important aspects of correct ML algorithm execution — e.g., atomic recoverability of each computing step (Spark), or consistency satisfaction for all subsets of model variables (GraphLab) — other important aspects, such as the three statistical properties discussed above (or perhaps ones that are even more fundamental and general, and which could open more room for efficient system designs), remain unexplored.

To exploit these properties, Petuum introduces three novel system objectives grounded in the aforementioned key properties of ML programs, in order to accelerate their convergence at scale: (1) Petuum synchronizes the parameter states with a bounded-staleness guarantee, which achieves provably correct outcomes due to the error-tolerant nature of ML, but at a much cheaper communication cost than conventional per-iteration bulk synchronization; (2) Petuum offers dynamic scheduling policies that take into account the changing structural dependencies between model parameters, so as to minimize parallelization error and synchronization costs; and (3) since parameters in ML programs exhibit non-uniform convergence costs (i.e., different numbers of updates required), Petuum prioritizes computation towards non-converged model parameters, so as to achieve faster convergence.

To demonstrate this approach, we show how a data-parallel and a model-parallel algorithm can be implemented on Petuum, allowing them to scale to large model sizes with improved algorithm convergence times. This is illustrated in Figure 1, where Petuum is able to solve a range of ML problems at reasonably large model scales, even on relatively modest clusters (10-100 machines) that are within reach of most ML practitioners. The experiments section provides more detailed benchmarks on a range of ML programs: topic modeling, matrix factorization, deep learning, Lasso regression, and distance metric learning. These algorithms are only a subset of the full open-source Petuum ML library (Petuum is available as open source at http://petuum.org), which includes more algorithms not explored in this paper: random forests, K-means, sparse coding, MedLDA, SVM, and multi-class logistic regression, with many others being actively developed for future releases.

2 Preliminaries: On Data and Model Parallelism

We begin with a principled formulation of iterative-convergent ML programs, which exposes a dichotomy of data and model that in turn inspires the parallel system architecture (§3), algorithm design (§4), and theoretical analysis (§5) of Petuum. Consider the following programmatic view of ML as iterative-convergent programs, driven by an objective function:

Iterative-Convergent ML Algorithm: Given data D and a loss function L (i.e., a fitness function such as RMS loss, likelihood, or margin), a typical ML problem can be grounded as executing the following update equation iteratively, until the model state A (i.e., parameters and/or latent variables) reaches some stopping criterion:

A^{(t)} = F( A^{(t-1)}, \Delta_L(A^{(t-1)}, D) )    (1)

where the superscript (t) denotes the iteration. The update function \Delta_L() (which improves the loss L) performs computation on data D and model state A, and outputs intermediate results to be aggregated by F(). For simplicity, in the rest of the paper we omit L in the subscript, with the understanding that all ML programs of interest here bear an explicit loss function that can be used to monitor the quality of convergence and the solution, as opposed to heuristics or procedures not associated with such a loss function.
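As a toy illustration of this template, the following Python sketch (our own code, not Petuum; update_fn plays the role of Δ and aggregate_fn the role of F, both hypothetical names) runs the generic iterative-convergent loop of Eq. (1), instantiated here for least-squares regression by gradient descent:

import numpy as np

def iterative_convergent(A0, D, update_fn, aggregate_fn, max_iter=100, tol=1e-6):
    # Run A^(t) = F(A^(t-1), Delta(A^(t-1), D)) until the model state stops changing.
    A = A0
    for t in range(max_iter):
        delta = update_fn(A, D)           # Delta: compute intermediate results from data
        A_new = aggregate_fn(A, delta)    # F: fold them into the new model state
        if np.linalg.norm(A_new - A) < tol:
            return A_new                  # stopping criterion reached
        A = A_new
    return A

# Toy instantiation: least-squares regression by gradient descent.
# Delta returns the gradient of the loss; F applies a fixed-size step.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_fn = lambda A, D: D[0].T @ (D[0] @ A - D[1]) / len(D[1])
step_fn = lambda A, g: A - 0.1 * g
A_fit = iterative_convergent(np.zeros(5), (X, y), grad_fn, step_fn)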

In large-scale ML, both data and model can be very large. Data-parallelism, in which the data is divided across machines, is a common strategy for solving Big Data problems, while model-parallelism, which divides the ML model, is common for Big Models. Below, we discuss the different mathematical implications of each form of parallelism (see Fig. 2).

2.1 Data Parallelism

Figure 2: The difference between data and model parallelism: data samples are always conditionally independent given the model, but there are some model parameters that are not independent of each other.

In data-parallel ML, the data D is partitioned and assigned to P computational workers (indexed by p = 1, ..., P); we denote the p-th data partition by D_p. We assume that the update function \Delta() can be applied to each of these data subsets independently, yielding a data-parallel update equation:

A^{(t)} = F( A^{(t-1)}, \sum_{p=1}^{P} \Delta(A^{(t-1)}, D_p) )    (2)

In this definition, we assume that the \Delta() outputs are aggregated via summation, which is commonly seen in stochastic gradient descent or sampling-based algorithms. For example, in the distance metric learning problem, which is optimized with stochastic gradient descent (SGD), the data pairs are partitioned over different workers, and the intermediate results (sub-gradients) are computed on each partition and summed before being applied to update the model parameters. Other algorithms, such as variational EM, can also be expressed in this form. Importantly, this additive-updates property allows the updates \Delta() to be aggregated at each local worker before transmission over the network, which is crucial because CPUs can produce updates \Delta() much faster than they can be (individually) transmitted over the network. Additive updates are the foundation for a host of techniques to speed up data-parallel execution, such as minibatching, asynchronous and bounded-asynchronous execution, and parameter servers. Key to the validity of additivity of updates from different workers is the notion of independent and identically distributed (iid) data, which is assumed for many ML programs, and implies that each parallel worker contributes "equally" (in a statistical sense) to the ML algorithm's progress via \Delta(), no matter which data subset D_p it uses.
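To make the additive-update pattern of Eq. (2) concrete, here is a minimal single-process Python sketch (illustrative names only, not the Petuum API): each simulated worker computes a summed minibatch gradient on its own shard, and only the locally aggregated updates are combined by the F() step.

import numpy as np

def local_update(A, D_p, batch=32):
    # Delta(): summed minibatch gradient of squared loss on one data partition.
    X_p, y_p = D_p
    idx = np.random.choice(len(y_p), size=min(batch, len(y_p)), replace=False)
    Xb, yb = X_p[idx], y_p[idx]
    return -Xb.T @ (Xb @ A - yb)          # additive update (negative gradient)

def data_parallel_step(A, partitions, lr=0.01):
    # In a real cluster, each worker ships only its locally summed update over
    # the network; here the P "workers" are just iterated sequentially.
    total = sum(local_update(A, D_p) for D_p in partitions)
    return A + lr * total                 # F(): apply the summed update

rng = np.random.default_rng(1)
X, y = rng.normal(size=(1000, 10)), rng.normal(size=1000)
partitions = [(X[p::4], y[p::4]) for p in range(4)]   # 4 simulated workers
A = np.zeros(10)
for t in range(50):
    A = data_parallel_step(A, partitions)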

2.2 Model Parallelism

In model-parallel ML, the model A is partitioned and assigned to P workers and updated therein in parallel, running update functions \Delta_p(). Unlike data-parallelism, each update function \Delta_p() also takes a scheduling function S_p(), which restricts \Delta_p() to operate on a subset of the model parameters A:

A^{(t)} = F( A^{(t-1)}, \{ \Delta_p(A^{(t-1)}, S_p(A^{(t-1)})) \}_{p=1}^{P} )    (3)

where we have omitted the data D for brevity and clarity. S_p() outputs a set of indices {j_1, j_2, ...}, so that \Delta_p() only performs updates on A_{j_1}, A_{j_2}, ... — we refer to such selection of model parameters as scheduling.

Unlike data-parallelism, which enjoys iid data properties, the model parameters A_j are not, in general, independent of each other (Figure 2), and it has been established that model-parallel algorithms can only be effective if the parallel updates are restricted to independent (or weakly-correlated) parameters [15, 2, 22, 17]. Hence, our definition of model-parallelism includes a global scheduling mechanism that can select carefully-chosen parameters for parallel updating.

The scheduling function S_p() opens up a large design space, such as fixed, randomized, or even dynamically-changing scheduling over the whole space of model parameters or a subset of it. S_p() not only can provide safety and correctness (e.g., by selecting independent parameters and thus minimizing parallelization error), but can also offer substantial speed-ups (e.g., by prioritizing computation onto non-converged parameters). In the Lasso example, Petuum uses S_p() to select coefficients that are weakly correlated (thus preventing divergence), while at the same time prioritizing coefficients far from zero (which are more likely to be non-converged).

2.3 Implementing Data- and Model-Parallel Programs

Data- and model-parallel programs are stateful, in that they continually update shared model parameters A. Thus, an ML platform needs to synchronize A across all running threads and processes, and this should be done in a high-performance, non-blocking manner that still guarantees convergence. Ideally, the platform should also offer easy, global-variable-like access to A (as opposed to cumbersome message passing or non-stateful MapReduce-like functional interfaces). If the program is model-parallel, it may require fine control over parameter scheduling to avoid non-convergence; such capability is not available in Hadoop, Spark, or GraphLab without code modification. Hence, there is an opportunity to address these considerations via a platform tailored to data- and model-parallel ML.

3 Petuum – a Platform for Distributed ML

A core goal of Petuum is to allow practitioners to easily implement data-parallel and model-parallel ML algorithms. Petuum provides APIs to key systems that make data- and model-parallel programming easier: (1) a parameter server system, which allows programmers to access global model state A from any machine via a convenient distributed shared-memory interface that resembles single-machine programming, and which adopts a bounded-asynchronous consistency model that preserves data-parallel convergence guarantees, thus freeing users from explicit network synchronization; (2) a scheduler, which allows fine-grained control over the parallel ordering of model-parallel updates — in essence, the scheduler allows users to define their own ML application consistency rules.

3.1 Petuum System Design

ML algorithms exhibit several principles that can be exploited to speed up distributed ML programs: dependency structures between parameters, non-uniform convergence of parameters, and a limited degree of error tolerance [10, 4, 15, 30, 16, 17]. Petuum allows practitioners to write data-parallel and model-parallel ML programs that exploit these principles, and can be scaled to Big Data and Big Model applications. The Petuum system comprises three components (Fig. 3): scheduler, workers, and parameter server, and Petuum ML programs are written in C++ (with Java support coming in the near future).

Figure 3: Petuum system: scheduler, workers, parameter servers.

Scheduler: The scheduler system enables model-parallelism by allowing users to control which model parameters are updated by worker machines. This is performed through a user-defined scheduling function schedule() (corresponding to S_p()), which outputs a set of parameters for each worker — for example, a simple schedule might pick a random parameter for every worker, while a more complex scheduler (as we will show) may pick parameters according to multiple criteria, such as pairwise independence or distance from convergence. The scheduler sends the identities of these parameters to workers via the scheduling control channel (Fig. 3), while the actual parameter values are delivered through a parameter server system that we will soon explain; the scheduler is responsible only for deciding which parameters to update. Later, we will discuss some of the theoretical guarantees enjoyed by model-parallel schedules.

Several common patterns for schedule design are worth highlighting. The simplest option is a fixed schedule (schedule_fix()), which dispatches model parameters in a pre-determined order (as is common in existing ML algorithm implementations); static, round-robin schedules (e.g., repeatedly looping over all parameters) fit the schedule_fix() model. Another type is dependency-aware scheduling (schedule_dep()), which allows re-ordering of variable/parameter updates to accelerate model-parallel ML algorithms such as Lasso regression; this type of schedule analyzes the dependency structure over the model parameters A in order to determine their best parallel execution order. Finally, prioritized scheduling (schedule_pri()) exploits uneven convergence in ML by prioritizing subsets of variables according to algorithm-specific criteria, such as the magnitude of each parameter, or boundary conditions such as the KKT conditions.

Because scheduling functions schedule() may be compute-intensive, Petuum uses pipelining to overlap scheduling computations with worker execution, so workers are always doing useful computation. In addition, the scheduler is responsible for central aggregation via the pull() function (corresponding to F()), if it is needed.

Workers: Each worker receives the parameters to be updated from the scheduling function schedule(), and then runs parallel update functions push() (corresponding to \Delta()) on data D. Petuum intentionally does not specify a data abstraction, so that any data storage system may be used — workers may read from data loaded into memory, from disk, or over a distributed file system or database such as HDFS. Furthermore, workers may touch the data in any order desired by the programmer: in data-parallel stochastic algorithms, workers might sample one data point at a time, while in batch algorithms, workers might instead pass through all data points in one iteration. While push() is being executed, the model state A is automatically synchronized with the parameter server via the parameter exchange channel, using a distributed shared-memory programming interface that conveniently resembles single-machine programming. After the workers finish push(), the scheduler may use the new model state to generate future scheduling decisions.

Parameter Server: The parameter server (PS) provides global access to the model parameters A via a convenient distributed shared-memory API that is similar to table-based or key-value stores. To take advantage of ML-algorithmic principles, the PS implements the Stale Synchronous Parallel (SSP) consistency model [10, 4], which reduces network synchronization and communication costs while maintaining the bounded-staleness convergence guarantees implied by SSP. We will discuss these guarantees in more detail later.

3.2 Programming Interface

// Petuum Program Structure

schedule()
  // This is the (optional) scheduling function
  // It is executed on the scheduler machines
  A_local = PS.get(A)                  // Parameter server read
  PS.inc(A,change)                     // Can write to PS here if needed
  // Choose variables for push() and return
  svars = my_scheduling(DATA,A_local)
  return svars

push(p = worker_id(), svars = schedule())
  // This is the parallel update function
  // It is executed on each of P worker machines
  A_local = PS.get(A)                  // Parameter server read
  // Perform computation and send return values to pull()
  // Or just write directly to PS
  change1 = my_update1(DATA,p,A_local)
  change2 = my_update2(DATA,p,A_local)
  PS.inc(A,change1)                    // Parameter server increment
  return change2

pull(svars = schedule(), updates = (push(1), …, push(P)))
  // This is the (optional) aggregation function
  // It is executed on the scheduler machines
  A_local = PS.get(A)                  // Parameter server read
  // Aggregate updates from push(1..P) and write to PS
  change = my_aggregate(A_local,updates)
  PS.put(A,change)                     // Parameter server overwrite

Figure 4: Petuum Program Structure.

Figure 4 shows a basic Petuum program, consisting of a central scheduler function schedule(), a parallel update function push(), and a central aggregation function pull(). The model variables A are held in the parameter server, which can be accessed at any time from any function via the PS object. The PS object provides three operations: PS.get() to read a parameter, PS.inc() to add to a parameter, and PS.put() to overwrite a parameter. With just these operations, the SSP consistency model automatically ensures parameter consistency between all Petuum components; no additional user programming is necessary. Finally, we use DATA to represent the data D; as noted earlier, this can be any third-party data structure, database, or distributed file system.

4 Petuum Parallel Algorithms

Now we turn to the development of parallel algorithms for large-scale distributed ML problems, in light of the data- and model-parallel principles underlying Petuum. We focus on a new data-parallel Distance Metric Learning algorithm and a new model-parallel Lasso algorithm, but our strategies apply to a broad spectrum of other ML problems, as briefly discussed at the end of this section. We show that, with the Petuum system framework, we can easily realize these algorithms on distributed clusters without dwelling on low-level system programming or non-trivial recasting of our ML problems into representations such as RDDs or vertex programs. Instead, our ML problems can be coded at a high level, more akin to Matlab or R.

4.1 Data-Parallel Distance Metric Learning

Let us first consider a large-scale Distance Metric Learning (DML) problem. DML improves the performance of other ML programs, such as clustering, by allowing domain experts to incorporate prior knowledge of the form "data points x, y are similar (or dissimilar)" [26] — for example, we could enforce that "books about science are different from books about art". The output is a distance function that captures the aforementioned prior knowledge. Learning a proper distance metric [5, 26] is essential for many distance-based data mining and machine learning algorithms, such as retrieval, k-means clustering and k-nearest neighbor (k-NN) classification. DML has not received much attention in the Big Data setting, and we are not aware of any distributed implementations of DML.

The most popular version of DML tries to learn a Mahalanobis distance matrix M (symmetric and positive-semidefinite), which can then be used to measure the distance between two samples as (x - y)^T M (x - y). Given a set S of "similar" sample pairs and a set D of "dissimilar" pairs, DML learns the Mahalanobis distance by optimizing

\min_{M} \sum_{(x,y) \in S} (x - y)^T M (x - y)  s.t.  (a - b)^T M (a - b) \ge 1  \forall (a,b) \in D,  M \succeq 0    (4)

where M \succeq 0 denotes that M is required to be positive semidefinite. This optimization problem tries to minimize the Mahalanobis distances between all pairs labeled as similar, while separating dissimilar pairs with a margin of 1.

In its original form, this optimization problem is difficult to parallelize due to the constraint set. To create a data-parallel optimization algorithm and implement it on Petuum, we relax the constraints via slack variables (similar to SVMs). First, we replace M with L^T L, and introduce slack variables \xi_{a,b} to relax the hard constraints in Eq. (4), yielding

\min_{L} \sum_{(x,y) \in S} \|L(x - y)\|_2^2 + \lambda \sum_{(a,b) \in D} \xi_{a,b}  s.t.  \|L(a - b)\|_2^2 \ge 1 - \xi_{a,b},  \xi_{a,b} \ge 0    (5)

Using hinge loss, the constraints in Eq. (5) can be eliminated, yielding an unconstrained optimization problem:

\min_{L} \sum_{(x,y) \in S} \|L(x - y)\|_2^2 + \lambda \sum_{(a,b) \in D} \max(0, 1 - \|L(a - b)\|_2^2)    (6)

Unlike the original constrained DML problem, this relaxation is fully data-parallel, because it now treats the dissimilar pairs as iid data for the loss function (just like the similar pairs); hence, it can be solved via data-parallel Stochastic Gradient Descent (SGD). SGD can be naturally parallelized over data, and we partition the data pairs onto P machines. Every iteration, each machine p randomly samples a minibatch of similar pairs S_p and dissimilar pairs D_p from its data shard, and computes the following (sub)gradient-based update to L:

\Delta L = \sum_{(x,y) \in S_p} 2L(x - y)(x - y)^T - \lambda \sum_{(a,b) \in D_p} 2L(a - b)(a - b)^T \, I[\|L(a - b)\|_2^2 \le 1]    (7)

where I[\cdot] is the indicator function.
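For concreteness, the following NumPy sketch (our own illustrative code, assuming the hinge-loss relaxation of Eq. (6)) computes the minibatch (sub)gradient that appears in Eq. (7):

import numpy as np

def dml_minibatch_grad(L, sim_pairs, dis_pairs, lam=1.0):
    # Subgradient of  sum ||L(x-y)||^2 + lam * sum max(0, 1 - ||L(a-b)||^2)  w.r.t. L.
    G = np.zeros_like(L)
    for x, y in sim_pairs:                         # similar pairs: pull together
        d = x - y
        G += 2.0 * (L @ np.outer(d, d))            # d/dL ||L d||^2 = 2 L d d^T
    for a, b in dis_pairs:                         # dissimilar pairs: push apart
        d = a - b
        if np.sum((L @ d) ** 2) < 1.0:             # indicator: margin still violated
            G -= 2.0 * lam * (L @ np.outer(d, d))
    return G

# A worker's SGD step would then move L against this gradient, e.g.
#   L_local = L_local - step_size * dml_minibatch_grad(L_local, S_batch, D_batch)
# which corresponds (up to sign and step-size conventions) to the quantity
# accumulated by DeltaL() in Figure 5.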

// Data-Parallel Distance Metric Learning

schedule()
  // Empty, do nothing

push()
  L_local = PS.get(L)                  // Bounded-async read from param server
  change = 0
  for c = 1..C                         // Minibatch size C
    (x,y) = draw_similar_pair(DATA)
    (a,b) = draw_dissimilar_pair(DATA)
    change += DeltaL(L_local,x,y,a,b)  // SGD from Eq 7
  PS.inc(L,change/C)                   // Add gradient to param server

pull()
  // Empty, do nothing

Figure 5: Petuum DML data-parallel pseudocode.

Figure 5 shows pseudocode for Petuum DML, which is simple to implement because the parameter server system PS abstracts away complex networking code under a simple get()/inc()/put() API. Moreover, the PS automatically ensures high-throughput execution, via a bounded-asynchronous consistency model (Stale Synchronous Parallel) that can provide workers with stale local copies of the parameters L, instead of forcing workers to wait for network communication. Later, we will review the consistency and convergence guarantees provided by the SSP model.

Since DML is a data-parallel algorithm, only the parallel update push() needs to be implemented (Figure 5). The scheduling function schedule() is empty (because every worker touches every model parameter in L), and the aggregation function pull() is not needed for this SGD algorithm. In our next example, we will show how schedule() and pull() can be used to implement model-parallel execution.

4.2 Model-Parallel Lasso

Lasso is a widely used model to select features in high-dimensional problems, such as gene-disease association studies, or in online advertising via \ell_1-penalized regression [8]. Lasso takes the form of an optimization problem:

\min_{\beta} \ell(X, y, \beta) + \lambda \sum_{j} |\beta_j|    (8)

where \lambda denotes a regularization parameter that determines the sparsity of \beta, and \ell(\cdot) is a non-negative convex loss function such as squared loss or logistic loss; we assume that X and y are standardized and consider (8) without an intercept. For simplicity, but without loss of generality, we let \ell(X, y, \beta) = (1/2)\|y - X\beta\|_2^2; other loss functions (e.g., logistic) are straightforward and can be solved using the same approach [2]. We shall solve this via a coordinate descent (CD) model-parallel approach, similar but not identical to [2, 22].

// Model-Parallel Lasso

schedule()
  for j = 1..J                              // Update priorities for all coeffs beta_j
    c_j = square(beta_j) + eta              // Magnitude prioritization
  (s_1, …, s_L') = random_draw(distribution(c_1, …, c_J))
  // Choose L < L' pairwise-independent beta_j
  (j_1, …, j_L) = correlation_check(s_1, …, s_L')
  return (j_1, …, j_L)

push(p = worker_id(), (j_1, …, j_L) = schedule())
  // Partial computation for L chosen beta_j; calls PS.get(beta)
  (z_p[j_1], …, z_p[j_L]) = partial(DATA[p], j_1, …, j_L)
  return z_p

pull((j_1, …, j_L) = schedule(), (z_1, …, z_P) = (push(1), …, push(P)))
  for a = 1..L                              // Aggregate partial computation from P workers
    newval = sum_threshold(z_1[j_a], …, z_P[j_a])
    PS.put(beta[j_a], newval)               // Overwrite to parameter server

Figure 6: Petuum Lasso model-parallel pseudocode.

The simplest parallel CD Lasso algorithm, Shotgun [2], selects a random subset of parameters to be updated in parallel. We now present a scheduled model-parallel Lasso that improves upon Shotgun: the Petuum scheduler chooses parameters that are nearly independent of each other, thus guaranteeing convergence of the Lasso objective. In addition, it prioritizes these parameters based on their distance to convergence, thus speeding up optimization.

Why is it important to choose independent parameters via scheduling? Parameter dependencies affect the CD update equation in the following manner: by taking the gradient of (8), we obtain the CD update for \beta_j:

\beta_j^{(t)} \leftarrow S( x_j^T y - \sum_{k \ne j} x_j^T x_k \beta_k^{(t-1)}, \lambda )    (9)

where S(g, \lambda) := sign(g) max(|g| - \lambda, 0) is a soft-thresholding operator. In (9), if x_j^T x_k \ne 0 (i.e., nonzero correlation) and \beta_j^{(t-1)} \ne 0 and \beta_k^{(t-1)} \ne 0, then a coupling effect is created between the two features: they are no longer conditionally independent given the data. If the j-th and the k-th coefficients are updated concurrently, parallelization error may occur, causing the Lasso problem to converge slowly (or even diverge outright).
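The following NumPy sketch (illustrative names, assuming standardized feature columns) shows the soft-thresholded coordinate update of Eq. (9), and how a scheduled round would apply it to a chosen set of coordinates; the coupling argument above is why the chosen coordinates must be weakly correlated:

import numpy as np

def soft_threshold(g, lam):
    return np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)

def cd_update(X, y, beta, j, lam):
    # Closed-form update of beta_j with all other coordinates held fixed,
    # assuming standardized columns (x_j^T x_j = 1).
    r_j = X[:, j] @ (y - X @ beta + X[:, j] * beta[j])   # x_j^T (partial residual)
    return soft_threshold(r_j, lam)

def scheduled_round(X, y, beta, chosen, lam):
    # "Parallel" update of a scheduled set of weakly correlated coordinates:
    # compute all new values from the same old beta (push), then write them (pull).
    new_vals = {j: cd_update(X, y, beta, j, lam) for j in chosen}
    for j, v in new_vals.items():
        beta[j] = v
    return beta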

Petuum's schedule(), push() and pull() interface is readily suited to implementing scheduled model-parallel Lasso. We use schedule() to choose parameters with low dependency and to prioritize non-converged parameters. Petuum pipelines schedule() and push(); thus schedule() does not slow down workers running push(). Furthermore, by separating the scheduling code schedule() from the core optimization code push() and pull(), Petuum makes it easy to experiment with complex scheduling policies that involve prioritization and dependency checking, thus facilitating the implementation of new model-parallel algorithms — for example, one could use schedule() to prioritize according to KKT conditions in a constrained optimization problem, or to perform graph-based dependency checking as in GraphLab [17]. Later, we will show that the above Lasso schedule() is guaranteed to converge, and gives us near-optimal solutions by controlling errors from parallel execution. The pseudocode for scheduled model-parallel Lasso under Petuum is shown in Figure 6.

4.3 Other Algorithms

We have implemented other data- and model-parallel algorithms on Petuum as well. Here, we briefly mention a few, while noting that many others are included in the Petuum open-source library.

Topic Model (LDA): For LDA, the key parameter is the "word-topic" table, which needs to be updated by all worker machines. We adopt a simultaneous data-and-model-parallel approach to LDA, and use a fixed schedule function schedule_fix() to cycle disjoint subsets of the word-topic table and of the data across machines for updating (via push() and pull()), without violating structural dependencies in LDA.

Matrix Factorization (MF): High-rank decompositions of large matrices for improved accuracy [32] can be solved by a model-parallel approach, and we implement it via a fixed schedule function schedule_fix(), where each worker machine only performs the model update push() on a disjoint, unchanging subset of factor matrix rows.

Deep Learning (DL): We implemented two types on Petuum: a general-purpose fully-connected Deep Neural Network (DNN) using the cross-entropy loss, and a Convolutional Neural Network (CNN) for image classification based on the open-source Caffe project. We adopt a data-parallel strategy with a fixed schedule schedule_fix(), where each worker uses its data subset to perform updates push() to the full model A. While this data-parallel strategy could be amenable to MapReduce, Spark and GraphLab, we are not aware of DL implementations on those platforms.

5 Principles and Theory

Figure 7: Key properties of ML algorithms: (a) Non-uniform convergence; (b) Error-tolerant convergence; (c) Dependency structures amongst variables.

Our iterative-convergent formulation of ML programs, and the explicit notion of data and model parallelism, make it convenient to explore three key properties of ML programs — error-tolerant convergence, non-uniform convergence, dependency structures (Fig. 7) — and to analyze how Petuum exploits these properties in a theoretically-sound manner to speed up ML program completion at Big Learning scales.

Some of these properties have previously been successfully exploited by a number of bespoke, large-scale implementations of popular ML algorithms: e.g. topic models [28, 16], matrix factorization [27, 13], and deep learning [14]. It is notable that MapReduce-style systems (such as Hadoop [24] and Spark [29]) often do not fare competitively against these custom-built ML implementations, and one of the reasons is that these key ML properties are difficult to exploit under a MapReduce-like abstraction. Other abstractions may offer a limited degree of opportunity — for example, vertex programming [17] permits graph dependencies to influence model-parallel execution.

5.1 Error tolerant convergence

Data-parallel ML algorithms are often robust against minor errors in intermediate calculations; as a consequence, they still execute correctly even when their model parameters experience synchronization delays (i.e., the workers only see old or stale parameters), provided those delays are strictly bounded [19, 10, 4, 33, 1, 12]. Petuum exploits this error tolerance to substantially reduce network communication and synchronization overheads, by implementing the Stale Synchronous Parallel (SSP) consistency model [10, 4] on top of the parameter server system, which provides all machines with access to the parameters A.

The SSP consistency model guarantees that if a worker reads from the parameter server at iteration t, it will receive all updates from all workers computed at and before iteration t - s - 1, where s is the staleness threshold. If this is impossible because some straggling worker is more than s iterations behind, the reader will stop until the straggler catches up and sends its updates. For stochastic gradient descent algorithms (such as the DML program), SSP has very attractive theoretical properties [4], which we partially re-state here:

Theorem 1 (adapted from [4])

SGD under SSP, convergence in probability:

Let f(x) = \sum_{t=1}^{T} f_t(x) be a convex function, where the components f_t are also convex. We search for a minimizer x^* via stochastic gradient descent on each component \nabla f_t under SSP, with staleness parameter s and P workers. Let the updates be u_t := -\eta_t \nabla f_t(\tilde{x}_t) with decreasing step size \eta_t = \eta/\sqrt{t}. Under suitable conditions (the f_t are L-Lipschitz and the divergence between any two points in the domain is bounded by F^2), we have a bound of the form

P[ R[X]/T - O(1/\sqrt{T}) \ge \tau ] \le \exp\{ -O(T\tau^2) \},

where R[X] := \sum_{t=1}^{T} f_t(\tilde{x}_t) - f(x^*) is the regret of the stale iterates \tilde{x}_t, and the hidden constants depend on \eta, L, F, the staleness bound s, the number of workers P, and the mean \mu_\gamma and variance \sigma_\gamma of the observed staleness (the exact bound is given in [4]).

This means that the average regret R[X]/T converges to zero (at rate O(1/\sqrt{T})) in probability with an exponential tail bound; convergence is faster when the observed staleness mean \mu_\gamma and variance \sigma_\gamma are smaller (and SSP ensures both are as small as possible). Dai et al. also showed that the variance of x can be bounded, ensuring reliability and stability near an optimum [4].
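To make the bounded-staleness rule concrete, the following toy single-process Python sketch simulates the SSP clock logic described at the start of this subsection; it is a simplified illustration, not the Petuum parameter server implementation:

import threading

class ToySSPClock:
    # Tracks each worker's committed iteration and enforces the bounded-staleness
    # read rule: a read at iteration t must see all updates from iterations <= t - s - 1.
    def __init__(self, num_workers, staleness):
        self.clock = [0] * num_workers
        self.s = staleness
        self.cv = threading.Condition()

    def commit(self, worker_id):
        # Called by a worker after it finishes an iteration and flushes its updates.
        with self.cv:
            self.clock[worker_id] += 1
            self.cv.notify_all()

    def wait_for_read(self, t):
        # Block until every worker has committed iteration t - s - 1 or later;
        # i.e., the reader stops only when some straggler is more than s iterations behind.
        with self.cv:
            while min(self.clock) < t - self.s - 1:
                self.cv.wait()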

5.2 Dependency structures

Naive parallelization of model-parallel algorithms (e.g., coordinate descent) may lead to uncontrolled parallelization error and non-convergence, caused by inter-parameter dependencies in the model. Such dependencies have been thoroughly analyzed under fixed execution schedules (where each worker updates the same set of parameters every iteration) [22, 2, 21], but there has been little research on dynamic schedules that can react to changing model dependencies or to the model state A. Petuum's scheduler allows users to write dynamic scheduling functions S_p(A^{(t)}) — whose output is a set of model indices {j_1, j_2, ...}, telling worker p to update A_{j_1}, A_{j_2}, ... — as their application requires. This enables ML programs to analyze dependencies at run time (implemented via schedule()) and to select subsets of independent (or nearly-independent) parameters for parallel updates.

To motivate this, we consider a generic optimization problem, which many regularized regression problems — including the Petuum Lasso example — fit into:

\min_{w} f(w) + r(w)    (10)

where r(w) = \sum_{j} r(w_j) is separable and f has \beta-Lipschitz continuous gradient in the following sense:

f(w + z) \le f(w) + z^T \nabla f(w) + \frac{\beta}{2} z^T X^T X z    (11)

where X = [x_1, ..., x_d] are the feature vectors. W.l.o.g., we assume that each feature vector is normalized, i.e., \|x_j\|_2 = 1 for j = 1, ..., d. Therefore |x_i^T x_j| \le 1 for all i, j.

In the regression setting, f(w) represents a least-squares loss, r(w) represents a separable regularizer (e.g., the \ell_1 penalty), and x_j represents the j-th feature column of the design (data) matrix; each element in x_j comes from a separate data sample. In particular, x_i^T x_j is the correlation between the i-th and j-th feature columns. The parameters w are simply the regression coefficients.

In the context of the model-parallel equation (3), we can map the model A to the coefficient vector w, the data D to {X, y}, and the update equation to a proximal coordinate step

w_{j_p}^{(t)} = prox_{r/\beta}( w_{j_p}^{(t-1)} - (1/\beta) \nabla_{j_p} f(w^{(t-1)}) ),  j_p = S_p(w^{(t-1)})    (12)

where S_p() has selected a single coordinate j_p to be updated by worker p — thus, P coordinates are updated in every iteration. The aggregation function F() simply allows each update to pass through without change.

The effectiveness of parallel coordinate descent depends on how the schedule selects the coordinates to be updated. In particular, naive random selection can lead to a poor convergence rate or even divergence, with error proportional to the correlation between the randomly-selected coordinates [22, 2]. An effective and cheaply-computable schedule involves randomly proposing a small set of candidate features, and then keeping only those features whose pairwise correlations satisfy |x_i^T x_j| \le \theta for some threshold \theta, where x_i, x_j are any two features in the proposed set. This requires at most a quadratic (in the proposal size) number of evaluations of x_i^T x_j; if we cannot find enough features that meet the criterion, we simply reduce the degree of parallelism. We have the following convergence theorem:

Theorem 2

Convergence: Under the dependency-checking schedule with correlation threshold \theta, after t iterations we have

E[F(w^{(t)})] - F(w^*) = O(1/t),    (13)

where F := f + r, w^* is a minimizer of F, and the constants depend on the number of features, the expected number of coordinates updated per iteration, and the spectral radius induced by the threshold \theta (the exact constants are given in the appendix).

For reference, the Petuum Lasso scheduler uses this dependency-checking schedule, augmented with a prioritizer that we describe shortly.
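A minimal sketch of the dependency-checking step inside such a scheduler (our own illustrative code; it assumes normalized feature columns so that x_i^T x_j is a correlation) might look as follows:

import numpy as np

def schedule_dep_check(X, candidates, L, theta):
    # Keep at most L of the proposed coordinates such that every retained pair
    # satisfies |x_i^T x_j| <= theta (columns of X are assumed normalized).
    chosen = []
    for j in candidates:
        if all(abs(X[:, j] @ X[:, k]) <= theta for k in chosen):
            chosen.append(j)
        if len(chosen) == L:
            break
    return chosen    # may return fewer than L coordinates: parallelism degrades gracefully

# Example usage: propose L_prime random coordinates, then filter them.
# rng = np.random.default_rng(0)
# proposals = rng.choice(X.shape[1], size=L_prime, replace=False)
# chosen = schedule_dep_check(X, proposals, L, theta=0.1)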

In addition to asymptotic convergence, we show that the trajectory produced by the dependency-checking schedule is close to that of ideal parallel execution:

Theorem 3

The schedule is close to ideal execution: Let S_ideal() be an oracle schedule that always proposes P random features with zero correlation. Let w_ideal^{(t)} be its parameter trajectory, and let w^{(t)} be the parameter trajectory of the dependency-checking schedule. Then the expected difference between the two trajectories,

E[ |w_ideal^{(t)} - w^{(t)}| ],    (14)

is bounded by a quantity that shrinks with the iteration count and involves a data-dependent constant, the strong convexity constant of the objective, the domain width of the coefficients, and the expected number of indices that can actually be parallelized in each iteration (the exact form is given in the appendix).

The proofs for both theorems can be found in the online supplement (http://petuum.github.io/papers/kdd15_supp.pdf).

This dependency-checking schedule is different from Scherrer et al. [22], who pre-cluster all features before starting coordinate descent, in order to find "blocks" of nearly-independent parameters. In the Big Data and especially Big Model setting, feature clustering can be prohibitive — fundamentally, it requires evaluating x_i^T x_j for all feature combinations (i, j), and although greedy clustering algorithms can mitigate this to some extent, feature clustering is still impractical when the number of features is very large, as seen in some regression problems [8]. The proposed schedule only needs to evaluate a small number of correlations x_i^T x_j every iteration and, as we explain next, the random selection can be replaced with prioritization to exploit non-uniform convergence in ML problems.

5.3 Non-uniform convergence

In model-parallel ML programs, it has been empirically observed that some parameters can converge in far fewer updates than others [15]. For instance, this happens in Lasso regression because the model enforces sparsity, so most parameters remain at zero throughout the algorithm, with low probability of becoming non-zero again. Prioritizing Lasso parameters according to their magnitude greatly improves convergence per iteration, by avoiding frequent (and wasteful) updates to zero parameters [15].

We call this non-uniform ML convergence, which can be exploited via a dynamic scheduling function S_p(A^{(t)}) whose output changes with the iteration t — for instance, we can write a scheduler that proposes parameters with probability proportional to their current magnitude. This can be combined with the earlier dependency-structure checking, leading to a dependency-aware, prioritizing scheduler. Unlike the dependency-structure issue, prioritization has not received as much attention in the ML literature, though it has been used to speed up the PageRank algorithm, which is also iterative-convergent [31].

The prioritizing schedule can be analyzed in the context of the Lasso problem. First, we rewrite the problem by duplicating the original features with opposite sign: the design matrix now contains both x_j and -x_j for every original feature j (so that all coefficients can be taken to be non-negative).

Theorem 4 (Adapted from  [15])

Optimality of Lasso priority scheduler: Suppose B is the set of indices of coefficients updated in parallel at the t-th iteration, and the correlations are sufficiently small that the cross terms involving \delta\beta_j^{(t)} \delta\beta_k^{(t)} are negligible for all j \ne k \in B. Then, the sampling distribution p(j) \propto (\delta\beta_j^{(t)})^2 approximately maximizes a lower bound on the expected objective decrease E_B[F(\beta^{(t)}) - F(\beta^{(t)} + \delta\beta^{(t)})].

This theorem shows that a prioritizing scheduler speeds up Lasso convergence by decreasing the objective as much as possible every iteration. The pipelined Petuum scheduler system approximates \delta\beta_j^{(t)} with \delta\beta_j^{(t-1)}, because \delta\beta_j^{(t)} is unavailable until all computations on \beta^{(t)} have finished (and we want to schedule the next set of parameters before that happens, so that workers are always fully occupied). Since we are approximating, we add a small constant to every sampling weight to ensure that all coefficients have a non-zero probability of being updated.
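A small Python sketch of the resulting prioritized sampling step (illustrative names; probabilities proportional to the squared previous change plus a constant, as described above):

import numpy as np

def schedule_pri(delta_beta_prev, num_proposals, eta, rng):
    # Sample coordinate proposals with probability proportional to the squared
    # most-recent change plus a small constant eta, so that every coefficient
    # (including currently-zero ones) keeps a non-zero chance of being picked.
    scores = delta_beta_prev ** 2 + eta
    probs = scores / scores.sum()
    return rng.choice(len(probs), size=num_proposals, replace=False, p=probs)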

6 Performance

Petuum's ML-centric system design supports a variety of ML programs and improves their performance on Big Data in the following senses: (1) Petuum implementations of DML and Lasso achieve significantly faster convergence than baselines (i.e., DML implemented on a single machine, and Shotgun [2]); (2) Petuum ML implementations can run faster than other platforms (e.g., Spark and GraphLab; we omit Hadoop, as it is well-established that Spark and GraphLab significantly outperform it [29, 17]), because Petuum can exploit model dependencies, uneven convergence and error tolerance; (3) Petuum ML implementations can reach larger model sizes than other platforms, because Petuum stores ML program variables in a lightweight fashion (on the parameter server and scheduler); (4) for ML programs without existing distributed implementations, we can implement them on Petuum and show good scaling with an increasing number of machines. We emphasize that Petuum is, for the moment, primarily about allowing ML practitioners to implement and experiment with new data/model-parallel ML algorithms on small-to-medium clusters; Petuum currently lacks features that are necessary on much larger clusters, such as automatic recovery from machine failure. Our experiments are therefore focused on clusters with 10-100 machines, in accordance with our target users.

Performance of Distance Metric Learning and Lasso

Figure 8: Left: Petuum DML convergence curves with different numbers of machines (1 to 4). Right: Lasso convergence curves for Petuum Lasso and Shotgun.

We first demonstrate the performance of DML and Lasso implemented under Petuum. In Figure 8, we showcase the convergence of Petuum and baselines using a fixed model size (a fixed-dimension distance matrix for DML; 100M features for Lasso). For DML, increasing the number of machines consistently increases the convergence speed: Petuum DML achieves a 3.8-times speedup with 4 machines and a 1.9-times speedup with 2 machines, demonstrating that Petuum DML has the potential to scale very well with more machines. For Lasso, given the same number of machines, Petuum achieved a significantly faster convergence rate than Shotgun (which randomly selects a subset of parameters to be updated). In the initial stage, Petuum Lasso and Shotgun show similar convergence rates because Petuum updates every parameter in the first iteration to "bootstrap" the scheduler (at least one iteration is required to initialize all parameters). After this initial stage, Petuum dramatically decreases the Lasso objective compared to Shotgun, by taking advantage of dependency structures and non-uniform convergence via the scheduler.

Figure 9: Left: Petuum performance: relative speedup vs popular platforms (larger is better). Across ML programs, Petuum is at least 2-10 times faster than popular implementations. Right: Petuum is a good platform for writing cluster versions of existing single-machine algorithms, achieving near-linear speedup with increasing number of machines (Caffe CNN and DML).

Platform Comparison. Figure 9 (left) compares Petuum to popular ML platforms (Spark and GraphLab) and well-known cluster implementations (YahooLDA). For two common ML programs, LDA and MF, we show the relative speedup of Petuum over the other platforms' implementations. In general, Petuum is between 2-6 times faster than the other platforms; the differences help to illustrate the various data/model-parallel features in Petuum. For MF, Petuum uses the same model-parallel approach as Spark and GraphLab, but it performs twice as fast as Spark, while GraphLab ran out of memory. On the other hand, Petuum LDA is nearly 6 times faster than YahooLDA; the speedup mostly comes from the Petuum scheduler, which enables correct, dependency-aware model-parallel execution.

Scaling to Larger Models

Figure 10: Left: LDA convergence time: Petuum vs. YahooLDA (lower is better). Petuum's data-and-model-parallel LDA converges faster than YahooLDA's data-parallel-only implementation, and scales to more LDA parameters (larger vocabulary size and number of topics). Right panels: Matrix factorization convergence time: Petuum vs. GraphLab vs. Spark. Petuum is fastest and the most memory-efficient, and is the only platform that could handle Big MF models at the largest ranks tested on the given hardware budget.

Here, we show that Petuum supports larger ML models for the same amount of cluster memory. Figure 10 shows ML program running time versus model size, given a fixed number of machines — the left panel compares Petuum LDA and YahooLDA; Petuum LDA converges faster and supports substantially larger LDA models (LDA model size equals vocabulary size times the number of topics), allowing long-tail topics to be captured. The right panels compare Petuum MF versus Spark and GraphLab; again, Petuum is faster and supports much larger MF models (higher rank) than either baseline. Petuum's model scalability is the result of two factors: (1) model-parallelism, which divides the model across machines; (2) a lightweight parameter server system with minimal storage overhead.

Fast Cluster Implementations of New ML Programs

We show that Petuum facilitates the development of new ML programs without existing cluster implementations. In Figure 9 (right), we present two instances: first, a cluster version of the open-source Caffe CNN toolkit, created by adding a modest amount of Petuum code. The basic data-parallel strategy was left unchanged, so the Petuum port directly tests Petuum's efficiency. Compared to the original single-machine Caffe with no network communication, Petuum achieves close-to-linear speedup on 4 machines, due to the parameter server's SSP consistency for managing network communication. Second, we compare the Petuum DML program against the original DML algorithm proposed in [26] (denoted by Xing2002), which is optimized with SGD on a single machine (with parallelization over matrix operations). The intent is to show that a fairly simple data-parallel SGD implementation of DML (the Petuum program) can greatly speed up execution over a cluster. The Petuum implementation converges 3.8 times faster than Xing2002 on 4 machines — this provides evidence that Petuum enables data/model-parallel algorithms to be efficiently implemented over clusters.

Experimental settings

We used 3 clusters with varying specifications, demonstrating Petuum’s adaptability to different hardware: “Cluster-1” has machines with 2 AMD cores, 8GB RAM, 1Gbps Ethernet; “Cluster-2” has machines with 64 AMD cores, 128GB RAM, 40Gbps Infiniband; “Cluster-3” has machines with 16 Intel cores, 128GB RAM, 10Gbps Ethernet.

LDA was run on 128 Cluster-1 nodes, using 3.9M English Wikipedia abstracts with unigram and bigram vocabularies. MF and Lasso were run on 10 Cluster-2 nodes, using the Netflix data and a synthetic Lasso dataset with 100M features/parameters, respectively. CNN was run on 4 Cluster-3 nodes, using a 250K-image subset of ImageNet with 200 classes and 1.3M model parameters. The DML experiment was run on 4 Cluster-2 nodes, using the 1-million-sample ImageNet [7] dataset with 1,000 classes (220M model parameters) and 200M similar/dissimilar statements.

References

  • [1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In NIPS, 2011.
  • [2] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for l1-regularized loss minimization. In ICML, 2011.
  • [3] X. Chen, Q. Lin, S. Kim, J. Carbonell, and E. Xing. Smoothing proximal gradient method for general structured sparse learning. In UAI, 2011.
  • [4] W. Dai, A. Kumar, J. Wei, Q. Ho, G. Gibson, and E. P. Xing. High-performance distributed ml at scale through parameter server consistency models. In AAAI. 2015.
  • [5] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th international conference on Machine learning, pages 209–216. ACM, 2007.
  • [6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng. Large scale distributed deep networks. In NIPS 2012, 2012.
  • [7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009.
  • [8] H. B. McMahan et al. Ad click prediction: a view from the trenches. In KDD, 2013.
  • [9] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 101(Suppl 1):5228–5235, 2004.
  • [10] Q. Ho, J. Cipar, H. Cui, J.-K. Kim, S. Lee, P. B. Gibbons, G. Gibson, G. R. Ganger, and E. P. Xing. More effective distributed ml via a stale synchronous parallel parameter server. In NIPS, 2013.
  • [11] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 14, 2013.
  • [12] A. Kumar, A. Beutel, Q. Ho, and E. P. Xing. Fugue: Slow-worker-agnostic distributed learning for big models on big data. In AISTATS, 2014.
  • [13] A. Kumar, A. Beutel, Q. Ho, and E. P. Xing. Fugue: Slow-worker-agnostic distributed learning for big models on big data. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, pages 531–539, 2014.
  • [14] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
  • [15] S. Lee, J. K. Kim, X. Zheng, Q. Ho, G. Gibson, and E. P. Xing. On model parallelism and scheduling strategies for distributed machine learning. In NIPS. 2014.
  • [16] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. In OSDI, 2014.
  • [17] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. Distributed GraphLab: A Framework for Machine Learning and Data Mining in the Cloud. PVLDB, 2012.
  • [18] G. Malewicz, M. H. Austern, A. J. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski. Pregel: a system for large-scale graph processing. In ACM SIGMOD International Conference on Management of data. ACM, 2010.
  • [19] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
  • [20] R. Power and J. Li. Piccolo: building fast, distributed programs with partitioned tables. In OSDI. USENIX Association, 2010.
  • [21] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. arXiv preprint arXiv:1212.0873, 2012.
  • [22] C. Scherrer, A. Tewari, M. Halappanavar, and D. Haglin. Feature clustering for accelerating parallel coordinate descent. NIPS, 2012.
  • [23] Y. Wang, X. Zhao, Z. Sun, H. Yan, L. Wang, Z. Jin, L. Wang, Y. Gao, J. Zeng, Q. Yang, et al. Towards topic modeling for big data. arXiv preprint arXiv:1405.4402, 2014.
  • [24] T. White. Hadoop: The definitive guide. O’Reilly Media, Inc., 2012.
  • [25] S. A. Williamson, A. Dubey, and E. P. Xing. Parallel Markov chain Monte Carlo for nonparametric mixture models. In ICML, 2013.
  • [26] E. P. Xing, M. I. Jordan, S. Russell, and A. Y. Ng. Distance metric learning with application to clustering with side-information. In Advances in neural information processing systems, pages 505–512, 2002.
  • [27] H.-F. Yu, C.-J. Hsieh, S. Si, and I. Dhillon. Scalable coordinate descent approaches to parallel matrix factorization for recommender systems. In Data Mining (ICDM), 2012 IEEE 12th International Conference on, pages 765–774. IEEE, 2012.
  • [28] J. Yuan, F. Gao, Q. Ho, W. Dai, J. Wei, X. Zheng, E. P. Xing, T.-Y. Liu, and W.-Y. Ma. Lightlda: Big topic models on modest compute clusters. In Accepted to International World Wide Web Conference. 2015.
  • [29] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica. Spark: cluster computing with working sets. In HotCloud, 2010.
  • [30] Y. Zhang, Q. Gao, L. Gao, and C. Wang. Priter: A distributed framework for prioritized iterative computations. In SOCC, 2011.
  • [31] Y. Zhang, Q. Gao, L. Gao, and C. Wang. Priter: A distributed framework for prioritizing iterative computations. Parallel and Distributed Systems, IEEE Transactions on, 24(9):1884–1893, 2013.
  • [32] Y. Zhou, D. Wilkinson, R. Schreiber, and R. Pan. Large-scale parallel collaborative filtering for the netflix prize. In Algorithmic Aspects in Information and Management, 2008.
  • [33] M. Zinkevich, J. Langford, and A. J. Smola. Slow learners are fast. In NIPS, 2009.

Appendix A Proof of Theorem 2

We prove that the Petuum scheduler makes the regularized regression problem (10) converge. We note that the schedule has the following properties: (1) the scheduler uniformly randomly selects a small set of candidate coordinates out of the d coordinates (where d is the number of features); (2) the scheduler performs dependency checking and retains a subset of these candidates; (3) in parallel, each of the P workers is assigned one retained coordinate j_p and performs coordinate descent on it:

w_{j_p}^{(t)} = prox_{r/\beta}( w_{j_p}^{(t-1)} - (1/\beta) \nabla_{j_p} f(w^{(t-1)}) )    (15)

where \nabla_{j_p} f is the j_p-th partial derivative and the coordinate j_p is assigned to the p-th worker. Note that (15) is simply the gradient update w_{j_p} \leftarrow w_{j_p} - (1/\beta)\nabla_{j_p} f(w), followed by applying the proximity operator of r.

One way for the scheduler to select coordinates for a parallel update is to perform dependency checking: coordinates i and j may be updated together iff |x_i^T x_j| \le \theta for some parameter \theta \in (0, 1). Consider the matrix whose diagonal entries are 1 and whose (i, j)-th off-diagonal entry equals x_i^T x_j if |x_i^T x_j| \le \theta and 0 otherwise (16); its spectral radius \rho will play a major role in our analysis. Since every retained off-diagonal entry is at most \theta in magnitude, a trivial (Gershgorin-type) bound for the spectral radius is

\rho \le 1 + (d - 1)\theta.    (17)

Thus, if \theta is small, the spectral radius is small.

Denote the total number of coordinate pairs that can pass the dependency check; roughly, this is all possible pairs when \theta is close to 1. We assume that each such pair is selected by the scheduler with equal probability, which can be achieved by rejection sampling. As a consequence, the number of coordinates selected by the scheduler at each step is a random variable and may vary from step to step. In practice, we assume that its expectation equals the number of available workers.

Theorem 2

Let \epsilon < 1 denote the interference quantity defined in the proof below (it grows with the spectral radius \rho and with the variance of the number of coordinates selected per step). Then after t steps, we have

E[F(w^{(t)})] - F(w^*) = O(1/t),    (18)

where the constant depends on \epsilon, the number of features d, and the expected degree of parallelism, and w^* denotes a (global) minimizer of F (whose existence is assumed for simplicity).

Proof of Theorem 2

We first bound the algorithm's progress at step t. To avoid cumbersome double indices, we introduce shorthand for the coordinates selected at step t and their updates; applying (11) to the update (15) then bounds the expected per-step decrease of the objective by a descent term minus an interference term \epsilon that collects the cross-coordinate correlation contributions (the second inequality in this derivation follows from the optimality of the proximal update defined in (15)). Therefore, as long as \epsilon < 1, the algorithm decreases the objective. This in turn puts a limit on the expected number of parallel workers, which is roughly inversely proportional to the spectral radius.

The rest of the proof follows the same lines as that of Shotgun [2]. To give a quick idea, consider the case where the selected coordinates are uncorrelated; then the interference term vanishes and the per-step decrease takes its simplest form. Thus, defining the expected objective gap at step t, we have

(19)
(20)

Using induction, it follows that the gap is bounded by a universal constant divided by t.

The theorem confirms some intuition: the bigger the expected number of selected coordinates, the faster the algorithm converges, but a larger number also increases the interference term \epsilon, demonstrating a tradeoff between parallelization and correctness. The variance of the number of selected coordinates also plays a role: the smaller it is, the faster the algorithm converges (since \epsilon is proportional to it). Of course, the more coordinate pairs pass the dependency check — i.e., the fewer coordinates are correlated above \theta — the faster the algorithm converges (since \epsilon is inversely proportional to that count).

Remark: We compare Theorem 2 with Shotgun [2] and the block-greedy algorithm in [22]. The convergence rate we obtain is similar to Shotgun's, but with a significant difference: our spectral radius is potentially much smaller than Shotgun's, since by partitioning we zero out all entries in the correlation matrix that are bigger than the threshold \theta. In other words, we get to control the spectral radius, while Shotgun is totally passive.

The convergence rate in [22] has a similar form. Compared with ours, we have a bigger (hence worse) numerator, but the denominators are not directly comparable: we have a bigger spectral radius and a larger effective degree of parallelism, while [22] has a smaller spectral radius (essentially taking a submatrix of our correlation matrix) and a smaller degree of parallelism. Nevertheless, we note that [22] may have a higher per-step complexity: each worker needs to check all of its assigned coordinates just to update one "optimal" coordinate. In contrast, we simply pick a random coordinate, and hence can be much cheaper per step.

Appendix B Proof of Theorem 3

For the Regularized Regression Problem, we prove that the Petuum scheduler produces a solution trajectory that is close to ideal execution:

Theorem 3

(The schedule is close to ideal execution.) Let S_ideal() be an oracle schedule that always proposes P random features with zero correlation. Let w_ideal^{(t)} be its parameter trajectory, and let w^{(t)} be the parameter trajectory of the dependency-checking schedule. Then the expected difference between the two trajectories,

E[ |w_ideal^{(t)} - w^{(t)}| ],    (21)

is bounded by a quantity that involves a data-dependent constant, the strong convexity constant of the objective, the domain width of the coefficients, and the expected number of indices that can actually be parallelized in each iteration (since it may not always be possible to find P nearly-independent parameters).

We assume that the objective function is strongly convex — for certain problems, this can be achieved through parameter replication; e.g., the duplicated-feature (replicated) form of Lasso regression seen in Shotgun [2].

Lemma 1

The difference between successive updates is:

(22)

Proof: The Taylor expansion of the objective around w^{(t)}, coupled with the fact that the third- and higher-order derivatives are zero, leads to the above result.

Proof of Theorem 3

By using Lemma 1 and a telescoping sum:

(23)

Since the oracle schedule S_ideal() chooses features with zero correlation, the corresponding cross terms vanish. Again using Lemma 1 and a telescoping sum:

(24)

Taking the difference of the two sequences, we have:

(25)

Taking expectations with respect to the randomness in the iterations, the indices chosen at each iteration, and the inherent randomness in the two sequences, we have:

(26)

where the leading constant is data-dependent. Here, the difference between the two sequences can only arise from the non-ideal (correlated) coordinate choices made by the dependency-checking schedule.

Following the proof in the Shotgun paper [2], we get

(27)

where the bound involves a data-dependent constant, the domain width of the coefficients (i.e., the difference between their maximum and minimum possible values), and the expected number of indices that can actually be parallelized in each iteration.

Finally, we apply the strong convexity assumption to get

(28)

which introduces the strong convexity constant and completes the proof.