A Primer on Coordinate Descent Algorithms

09/30/2016 ∙ by Hao-Jun Michael Shi, et al. ∙ The University of Alabama ∙ Northwestern University

This monograph presents a class of algorithms called coordinate descent algorithms for mathematicians, statisticians, and engineers outside the field of optimization. This particular class of algorithms has recently gained popularity due to its effectiveness in solving large-scale optimization problems in machine learning, compressed sensing, image processing, and computational statistics. Coordinate descent algorithms solve optimization problems by successively minimizing along each coordinate or coordinate hyperplane, which is ideal for parallelized and distributed computing. Avoiding detailed technicalities and proofs, this monograph gives relevant theory and examples for practitioners to effectively apply coordinate descent to modern problems in data science and engineering.


1 Introduction

1.1 Overview

This monograph discusses a class of algorithms, called coordinate descent (CD) algorithms, which are useful for solving large-scale optimization problems with smooth or non-smooth and convex or non-convex objective functions. Although these methods have existed since the early development of the discipline, the optimization community did not emphasize them until recently, when various modern applications in machine learning, compressed sensing, and large-scale computational statistics yielded new problems well suited for CD algorithms. These methods are generally applicable to a variety of problems involving large or high-dimensional data sets since they naturally break complicated optimization problems down into simpler subproblems, which are easily parallelized or distributed. For some structured problems, CD has been shown to perform faster than traditional algorithms, such as gradient descent. In addition, CD is generally applicable to non-convex problems and, in this respect, is easier to understand than splitting methods such as the Alternating Direction Method of Multipliers. Also, few assumptions are needed to prove convergence to minima for convex problems and to stationary points for non-convex problems. In fact, certain CD variants have also been shown to converge for non-convex functions under fairly loose conditions.

CD algorithms follow the universal approach of algorithmic, numerical optimization: solving an optimization problem by solving a sequence of simpler subproblems. Each iterate is found by fixing most components of the variable vector $x$ at their current values and approximately minimizing the objective with respect to the remaining components. In this monograph, we will explore a variety of interesting variants, extensions, and applications of CD connected to many different topics in optimization, statistics, and applied mathematics.

1.2 Formulations

We will consider the following general unconstrained minimization problem:

$\min_{x \in \mathbb{R}^n} f(x)$   (1)

where $x = (x_1, \dots, x_n) \in \mathbb{R}^n$ and the function $f$ is continuous. Further assumptions may be made on the structure of $f$, such as convexity, Lipschitz continuity, differentiability, etc., while discussing theoretical guarantees for specific algorithms.

In addition, we will consider the following structured problem:

$\min_{x \in \mathbb{R}^n} F(x) := f(x) + \sum_{i=1}^{n} r_i(x_i)$   (2)

where $f$ is differentiable, and the $r_i$'s are extended-valued and possibly nondifferentiable functions. Problems appearing in many recent applications such as compressed sensing, statistical variable selection, and model selection can be formulated in the form of (2). Since we allow each $r_i$ to be extended-valued, it can model constraints on $x_i$ by including an indicator function, as discussed in Appendix A. The function $r_i$ can also include certain regularization terms to promote the structure of solutions, such as sparsity and low-rankness. We will further generalize the coordinate separable functions $r_i$ to block separable ones in Section 2.1.

1.3 Framework of Coordinate Descent

The basic coordinate descent (CD) framework for (1) and (2) is shown in Algorithm 1. At each iteration, we choose one component and adjust it by a certain update scheme while holding all other components fixed.

1:Set $k = 0$ and initialize $x^0 \in \mathbb{R}^n$;
2:repeat
3:     Choose index $i_k \in \{1, 2, \dots, n\}$;
4:     Update $x_{i_k}^k$ to $x_{i_k}^{k+1}$ by a certain scheme depending on $x^k$ and $f$ or $F$;
5:     Keep $x_j$ unchanged, i.e., $x_j^{k+1} = x_j^k$, for all $j \neq i_k$;
6:     Let $k \leftarrow k + 1$;
7:until termination condition is satisfied;
Algorithm 1 Coordinate Descent
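To make the framework concrete, the following minimal Python sketch (our own illustration, not code from the monograph) runs cyclic coordinate minimization on a strongly convex quadratic, where each coordinate subproblem has a closed-form solution:

```python
import numpy as np

def cyclic_cd_quadratic(Q, b, x0, num_cycles=100):
    """Cyclic CD on f(x) = 0.5 x^T Q x - b^T x (Q symmetric positive definite).

    Minimizing f over the single coordinate x_i with the others fixed gives
    the closed form x_i = (b_i - sum_{j != i} Q_{ij} x_j) / Q_{ii}.
    """
    x = x0.astype(float).copy()
    n = len(x)
    for _ in range(num_cycles):
        for i in range(n):                                # cyclic index rule
            partial = b[i] - Q[i] @ x + Q[i, i] * x[i]    # b_i - sum_{j!=i} Q_ij x_j
            x[i] = partial / Q[i, i]                      # exact coordinate minimizer
    return x

# Small test: the minimizer of f is the solution of Q x = b.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = cyclic_cd_quadratic(Q, b, np.zeros(2))
print(x, np.linalg.solve(Q, b))  # the two should nearly agree
```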

Intuitively, CD methods are easily visualized, particularly in the 2-dimensional case. Rather than moving all coordinates along a descent direction, CD changes a chosen coordinate at each iterate, moving as if it were on a grid, with each axis corresponding to each component. Figure 1 illustrates this process, where the coordinate minimization scheme is applied.

Figure 1: CD applied to a quadratic function starting from a given initial point. The method minimizes alternately with respect to one of $x_1$ and $x_2$ while fixing the other. The blue curves correspond to different level curves of the function.

Within this framework, there are many different approaches for choosing an index and updating the selected coordinate. These various rules and updates affect the convergence properties of CD on different types of problems. They may exploit problem-specific structures, such as sparsity. They may also perform differently depending on the conditioning of the problem or how much each coordinate subproblem depends on one another.

The most natural approach to choosing an index is to select components cyclically, i.e., $i_0 = 1$, $i_1 = 2$, $i_2 = 3$, and so on. Alternatively, we can select a component at random at each iteration (not necessarily with equal probability). Lastly, we can choose components greedily, choosing the component corresponding to the greatest descent, strongest descent potential, or other scores, at the current iteration. The index rule may also satisfy an essentially cyclic condition, in which every component is guaranteed to be updated at least once within every $T$ iterations. For example, we can perform a cyclic choice of components that are randomly shuffled after each cycle.

Regarding the update schemes, we can simply renew the selected component $x_{i_k}$ by minimizing the objective with respect to it while fixing the remaining ones. Specifically, for problem (1), we can perform the update:

$x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} f(x_{i_k}, x_{\neq i_k}^k)$   (3)

where $x_{\neq i_k}^k$ denotes all components of $x^k$ other than the $i_k$th, and for problem (2), one can have a similar update by replacing $f$ with $F$. Other update schemes can be performed if the problem has more structure. Suppose $f$ is differentiable in (1); then one can apply coordinate gradient descent along each chosen component, i.e.,

$x_{i_k}^{k+1} = x_{i_k}^k - \alpha_k \nabla_{i_k} f(x^k)$   (4)

where $\alpha_k$ is a step size that can be set by line search or according to the properties of $f$.

The scheme in (4) can be easily extended to solving (2) as

$x_{i_k}^{k+1} = x_{i_k}^k - \alpha_k \big( \nabla_{i_k} f(x^k) + \tilde{\nabla} r_{i_k}(x_{i_k}^k) \big)$   (5)

where $\tilde{\nabla} r_{i_k}(x_{i_k}^k)$ is a subgradient of $r_{i_k}$ at $x_{i_k}^k$. In addition, proximal or prox-linear updates can handle (2) when the $r_i$'s are not differentiable. These updates minimize a surrogate function that dominates the original objective around the current iterate. The proximal update uses as the surrogate function the sum of the original function and a proximal term, and the prox-linear update employs as the surrogate function the linearization of the differentiable part plus a proximal term and the nondifferentiable function. Both involve the proximal operator, which for a function $r$ is defined as

$\mathbf{prox}_r(y) = \arg\min_{x} \; r(x) + \tfrac{1}{2}\|x - y\|^2.$

For many functions with favorable structures, the proximal operator is cheap to compute. We call these functions proximable functions. Please refer to Appendix C for more detail.
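For instance, two standard proximable functions and their closed-form proximal operators, written as a brief Python sketch (our own illustration):

```python
import numpy as np

def prox_l1(y, lam):
    """prox of r(x) = lam * ||x||_1: componentwise soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_box(y, lo, hi):
    """prox of the indicator function of the box [lo, hi]^n: projection onto the box."""
    return np.clip(y, lo, hi)
```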

If we are minimizing the sum, or the average, of a vast number of functions, we can also apply stochastic updates, which use sample gradients computed by selecting one or a few of these functions at each update instead of the exact gradient. We will discuss these alternative update schemes further in Section 2.

The step size $\alpha_k$ in (4) and (5) can also be chosen in many fashions. The choice of step size is important because a descent direction alone is not sufficient to guarantee descent. If $\alpha_k$ is too large, the function value may increase; if $\alpha_k$ is too small, the algorithm will converge at a relatively slow rate. To select the step size $\alpha_k$, we may perform an exact line search along the $i_k$th component, use traditional backtracking line search methods to obtain sufficient descent (which may be more economical for certain separable objectives), or make predefined choices of $\alpha_k$ based on known properties of $f$.
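As an illustration of the backtracking option, a coordinate line search may be written as the following sketch (our own; the Armijo constant `c` and shrink factor `rho` are conventional but arbitrary choices):

```python
def coordinate_backtracking(f, x, i, g_i, alpha0=1.0, rho=0.5, c=1e-4, min_alpha=1e-12):
    """Armijo backtracking along the i-th coordinate.

    g_i is the partial derivative of f with respect to x_i at x; the trial
    point moves x_i in the negative gradient direction, and alpha shrinks
    until a sufficient-decrease condition holds.
    """
    fx, alpha = f(x), alpha0
    while alpha > min_alpha:
        x_trial = x.copy()
        x_trial[i] -= alpha * g_i
        if f(x_trial) <= fx - c * alpha * g_i ** 2:  # sufficient decrease along e_i
            return x_trial
        alpha *= rho
    return x  # no acceptable step found; keep the current iterate
```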

Useful examples that shed light on the performance of CD on differently structured problems are given in Section 4. We will also discuss related coordinate friendly analysis and theory, as well as useful heuristics, applied to various problems in Section 3.

1.4 Other Surveys

Coordinate descent algorithms have existed since the formation of the discipline. In response to the rising interest in large-scale optimization, a few articles have recently surveyed this class of algorithms. Wright [83] gives an in-depth review of coordinate descent algorithms, including convergence theory, which we highly recommend for optimizers and practitioners with a stronger background in optimization. Lange [34] provides a survey of optimization algorithms for statistics, including block coordinate descent algorithms.

We emphasize that this paper is specifically targeted towards engineers, scientists, and mathematicians outside of the optimization field, who may not have the requisite knowledge in optimization to understand research articles in this area. Because of this, we do not present the theory of coordinate descent algorithms in a formal manner but emphasize performance on real-world applications. In addition, we avoid discussing specific parallelized and distributed coordinate method implementations in detail since this remains an active area of research and conclusions cannot be drawn without discussing many implementation aspects. We instead give a list of possible approaches toward developing parallelized and distributed algorithms. We believe that the contents of this paper may serve as a guide and toolbox for practitioners to apply coordinate descent to more problems and applications in the future.

1.5 Outline

In Section 2, we give different classes of variants, including different update and index-selection schemes, and introduce the more general block coordinate formulation. In Section 3, we give the relevant theory to analyze problems in practice for CD, and discuss useful heuristics to exploit coordinate friendly structures. In Section 4, we describe modern applications of CD in engineering and data science. In Section 5, we give resources for parallelizing CD for solving large-scale systems. Lastly, in Section 6, we summarize our results from this monograph.

1.6 Notation

We introduce some notation before proceeding. Let $L$ be the gradient Lipschitz constant of the differentiable part $f$, i.e., for any $x$ and $y$, it holds that

$\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|.$

Let $L_i$ denote the block-wise gradient Lipschitz constant, i.e., for any $x$ and $h_i$,

$\|\nabla_i f(x_i + h_i, x_{\neq i}) - \nabla_i f(x_i, x_{\neq i})\| \le L_i\,\|h_i\|,$

where we note that $L_i$ may depend on the value of $x_j$ for all $j \neq i$. In addition, we introduce the shorthand

$f(x_{i_k}, x_{\neq i_k}^k) := f(x_1^k, \dots, x_{i_k - 1}^k, x_{i_k}, x_{i_k + 1}^k, \dots)$

when the $i_k$th block is chosen.

2 Algorithm Variants and Implementations

A wide range of implementation variants of CD have been developed for a large variety of applications. We discuss variants on the classic CD method introduced in Section 1 and describe their strengths and weaknesses below.

2.1 Block Coordinate Descent

Up until this point, we have only considered the method that updates one component of the variable $x$ at each iterate. We may generalize these coordinate updates to block coordinate updates. This method is particularly useful for applications with variables partitioned into blocks, such as non-negative matrix/tensor factorization (e.g., see [10]), group LASSO [89], and many distributed computing problems, where blocks of variables naturally appear.

Consider the following optimization problem:

$\min_{x} F(x) := f(x_1, \dots, x_s) + \sum_{i=1}^{s} r_i(x_i)$   (6)

where $x$ is decomposed into $s$ block variables $x_1, \dots, x_s$, $f$ is differentiable, and the $r_i$'s for $i = 1, \dots, s$ are extended-valued and possibly nondifferentiable functions. Note that if each block consists of a single component, we obtain formulation (2). We may similarly adjust formulation (1) for block variables.

From now on, we consider formulation (6) since it generalizes formulation (2). We can simply modify Algorithm 1 to fit this block-structured problem; the modified method is dubbed Block Coordinate Descent (BCD) and is given in Algorithm 2.

1:Set $k = 0$ and choose $x^0 \in \mathbb{R}^n$;
2:repeat
3:     Choose index $i_k \in \{1, 2, \dots, s\}$;
4:     Update $x_{i_k}^k$ to $x_{i_k}^{k+1}$ by a certain scheme depending on $x^k$ and $F$;
5:     Keep $x_j^{k+1} = x_j^k$ for $j \neq i_k$;
6:     Let $k \leftarrow k + 1$;
7:until termination condition is satisfied;
Algorithm 2 Block Coordinate Descent

Rather than updating a single chosen coordinate at each iterate, block CD renews a chosen block of coordinates while the other blocks are fixed. This method lends itself well to distributed or parallel computing since the update of one block is typically much cheaper than that of all block variables. This will be discussed further in Section 3.

Block CD is also a generalization of the alternating minimization method, which has been applied to a variety of problems, and of the expectation-maximization (EM) algorithm [15], which is essentially a 2-block CD.
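For example, alternating least squares for low-rank matrix factorization, $\min_{U,V} \|M - UV^\top\|_F^2$, is a 2-block coordinate minimization; the sketch below (our own illustration) minimizes exactly over one factor while the other is fixed:

```python
import numpy as np

def alternating_least_squares(M, rank, iters=50, seed=0):
    """2-block coordinate minimization for min_{U,V} ||M - U V^T||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        # Block 1: exact minimization over V with U fixed (a least-squares solve).
        V = np.linalg.lstsq(U, M, rcond=None)[0].T
        # Block 2: exact minimization over U with V fixed.
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
    return U, V
```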

2.2 Update Schemes

As done in (3), one can simply update $x_{i_k}$ by minimizing the objective with respect to it while fixing the remaining block variables. However, this update scheme can be hard to apply since the corresponding subproblem may be difficult to solve exactly. In addition, BCD with this update scheme may not converge for some non-smooth and/or non-convex problems. This deficiency motivates the introduction of alternative update schemes that give easier subproblems and ensure the convergence of the algorithm.

All of the update schemes are summarized below:

  1. (Block) Coordinate Minimization:

     $x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} f(x_{i_k}, x_{\neq i_k}^k) + r_{i_k}(x_{i_k})$

  2. (Block) Proximal Point Update:

     $x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} f(x_{i_k}, x_{\neq i_k}^k) + \frac{1}{2\eta_k}\|x_{i_k} - x_{i_k}^k\|^2 + r_{i_k}(x_{i_k})$

  3. (Block) Proximal Linear Update (Prox-Linear):

     $x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} \langle \nabla_{i_k} f(x^k), x_{i_k} - x_{i_k}^k \rangle + \frac{1}{2\eta_k}\|x_{i_k} - x_{i_k}^k\|^2 + r_{i_k}(x_{i_k})$

where in the proximal point update the step size $\eta_k$ can be any bounded positive number, and in the prox-linear update the step size can be set to $\eta_k = 1/L_{i_k}$.

Since each update scheme solves a different subproblem, the updates may generate different sequences that converge to different solutions. The coordinate minimization, proximal point, and prox-linear updates may each be interpreted as minimizing a surrogate function that upper bounds the original objective when $\eta_k$ is chosen appropriately, as noted for the BSUM algorithm [29]. It is important to understand the nuances of each scheme in order to apply the correct variant to a given application. We describe each update scheme in more detail below.

2.2.1 Block Coordinate Minimization

A natural way to update the selected block is to minimize the objective with respect to $x_{i_k}$ with all other blocks fixed, i.e., by the block coordinate minimization scheme:

$x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} F(x_{i_k}, x_{\neq i_k}^k)$   (7)

This classic scheme is the most intuitive and was first introduced in [27] and further analyzed in [4, 16, 23, 42, 78, 81]. BCD with this scheme is guaranteed to converge to a stationary point when the objective is convex, continuously differentiable, and strictly convex in each coordinate. However, BCD may not converge for some nonconvex problems. Powell [56] gives an example for which cyclic BCD with the update scheme in (7) fails to converge to a stationary point. Although the block minimization scheme is the most accessible and intuitive, alternative update schemes can have greater stability, better convergence properties, or easier subproblems. For Powell’s example, BCD with the proximal point or prox-linear update schemes does converge.

Figure 2: The function $f$ is a separable, non-smooth, convex function. The left side is a $\pi/4$-radian rotation of $f$, for which CD gets stuck at a non-stationary point using any of the presented updates. The right side is a smaller rotation of $f$, for which CD can correctly minimize.

Warga [81] provides a convex but non-smooth example for which BCD gets stuck at a non-stationary point. We illustrate this issue by rotating a simple separable function of the form $f(x_1, x_2) = |x_1| + c\,|x_2|$ with $c > 1$. Figure 2 depicts a $\pi/4$-radian rotation of $f$ and a smaller rotation. If we consider the $\pi/4$-radian rotation of $f$ and start at a point of the form $(\bar{x}, \bar{x})$ where $\bar{x} \neq 0$, then coordinate minimization along $x_1$ or $x_2$ will not decrease the objective value since the point is already optimal along each coordinate direction. Since the optimal solution is at the origin, the algorithm will get stuck at a non-optimal point. However, this problem does not occur for the smaller rotation of $f$, since CD can still decrease the objective along a coordinate direction at every non-optimal point.

More mathematically, the rotated functions correspond to different values of $\theta$ in the following function:

$f_\theta(x_1, x_2) = |x_1 \cos\theta + x_2 \sin\theta| + c\,|x_2 \cos\theta - x_1 \sin\theta|, \quad c > 1.$

For $\theta = \pi/4$, we can verify that, given any $x_2$, the minimizer of $f_{\pi/4}(\cdot, x_2)$ equals the value of $x_2$. (The same holds if, instead, we fix $x_1$ and minimize over $x_2$: the minimizer will equal the value of $x_1$.) Therefore, starting from any point $(x_1, x_2)$, a coordinate minimization over $x_1$ or $x_2$ will move to the point $(x_2, x_2)$ or $(x_1, x_1)$, respectively. If the point is not $\mathbf{0}$ (the origin), then any further coordinate update cannot reduce the value of $f_{\pi/4}$. Therefore, CD converges to a non-stationary point.

Though our example gives a case where CD may converge to a non-stationary point when the objective is convex, non-separable, and non-smooth, this does not always happen. CD will converge for the smaller rotation in the example since it will not get stuck at any of the contour corners.

A mathematical explanation of this phenomenon is that componentwise subgradients $g_1 \in \partial_{x_1} f(x_1, x_2)$ and $g_2 \in \partial_{x_2} f(x_1, x_2)$ do not necessarily mean that the full vector formed by concatenating the component subgradients is a subgradient, i.e., possibly $(g_1, g_2) \notin \partial f(x_1, x_2)$. Thus, in the case of $f_{\pi/4}$, a point of the form $(\bar{x}, \bar{x})$ with $\bar{x} \neq 0$ is not a stationary point because $\mathbf{0} \notin \partial f_{\pi/4}(\bar{x}, \bar{x})$, even though $0 \in \partial_{x_1} f_{\pi/4}(\bar{x}, \bar{x})$ and $0 \in \partial_{x_2} f_{\pi/4}(\bar{x}, \bar{x})$. A further explanation of this and of subdifferential calculus is detailed in Appendix B.

In general, BCD can fail to converge for an objective function that contains just one non-separable and non-smooth term, even if all the other terms are differentiable or separable. In fact, this failure can occur with any BCD update presented in this paper. However, in this case, though there are no theoretical guarantees, practitioners may still try applying BCD and check optimality conditions for convergence. When the non-separable and non-smooth term has the form $g(Ax)$ with $g$ convex, primal-dual coordinate update algorithms may also be applied since they decouple $g$ from the linear map $A$, though this is not a focus of this monograph. Please refer to [55, 53, 21] for more information on primal-dual coordinate update algorithms.

2.2.2 Proximal Point Update

The proximal point update, or proximal update, is defined by the following:

$x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} F(x_{i_k}, x_{\neq i_k}^k) + \frac{1}{2\eta_k}\|x_{i_k} - x_{i_k}^k\|^2$   (8)

where $\eta_k$ serves as a step size and can be any bounded positive number.

The proximal update with BCD was introduced in [2] for convex problems and further analyzed in [23, 59, 86] for possibly nonconvex problems. This update adds a quadratic proximal term to the classic block minimization update described in (7), and thus the function of each subproblem dominates the original objective around the current iterate. This modification gives the proximal update scheme better convergence properties and increased stability, particularly for non-smooth problems.

2.2.3 Prox-Linear Update

The proximal linear update, or prox-linear update, is defined by:

$x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} \langle \nabla_{i_k} f(x^k), x_{i_k} - x_{i_k}^k \rangle + \frac{1}{2\eta_k}\|x_{i_k} - x_{i_k}^k\|^2 + r_{i_k}(x_{i_k})$   (9)

where the step size $\eta_k$ can be set as the reciprocal of the Lipschitz constant $L_{i_k}$ of $\nabla_{i_k} f$ with respect to $x_{i_k}$. The difference between the prox-linear update and the proximal update is that the former further linearizes the smooth part $f$ to make the subproblem easier.

The prox-linear update scheme was introduced in [80], which proposed a more general framework of block coordinate gradient descent (BCGD) methods. It was later adopted and popularized by Nesterov [47] in the randomized coordinate descent method. BCD with this update scheme has been analyzed and applied to both convex and nonconvex problems, such as in [4, 28, 47, 80, 87, 91, 90, 95]. In essence, this scheme minimizes a surrogate function that dominates the original objective around the current iterate $x^k$. Note that when the regularization term $r_{i_k}$ vanishes, this update reduces to:

$x_{i_k}^{k+1} = \arg\min_{x_{i_k}} \langle \nabla_{i_k} f(x^k), x_{i_k} - x_{i_k}^k \rangle + \frac{1}{2\eta_k}\|x_{i_k} - x_{i_k}^k\|^2$   (10)

or equivalently the block gradient descent step:

$x_{i_k}^{k+1} = x_{i_k}^k - \eta_k \nabla_{i_k} f(x^k).$

When BCD with the proximal or prox-linear update scheme is applied to non-convex objectives, it often gives solutions with lower objective values than block coordinate minimization, since small regions containing certain local minima may be avoided by the local proximal or prox-linear approximation. Note that the prox-linear update may take more iterations than the other two schemes to reach the same accuracy. However, each update is easier to perform, so the method may take less total time, as demonstrated in [71, 85, 86].
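As a concrete instance (our own sketch, not code from the monograph), cyclic prox-linear CD applied to the LASSO problem $\min_x \frac{1}{2}\|Ax - b\|^2 + \lambda\|x\|_1$ reduces each coordinate update to a gradient step with $\eta = 1/L_i$ followed by soft-thresholding:

```python
import numpy as np

def prox_linear_cd_lasso(A, b, lam, num_cycles=100):
    """Cyclic prox-linear CD for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    For coordinate i the step size 1/L_i uses the coordinate Lipschitz
    constant L_i = ||A[:, i]||^2, and the prox of the l1 term is
    componentwise soft-thresholding.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                       # cached residual, maintained in O(m) per step
    L = (A ** 2).sum(axis=0)            # L_i = ||A[:, i]||^2
    for _ in range(num_cycles):
        for i in range(n):
            g_i = A[:, i] @ r           # coordinate gradient of the smooth part
            z = x[i] - g_i / L[i]       # gradient step with eta = 1/L_i
            x_new = np.sign(z) * max(abs(z) - lam / L[i], 0.0)  # prox step
            r += A[:, i] * (x_new - x[i])
            x[i] = x_new
    return x
```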

2.2.4 Extrapolation

Though the prox-linear update is easily computed and gives a better solution overall, coordinate minimization and proximal updates tend to make larger objective decreases per iteration. This observation motivates the use of extrapolation to accelerate the convergence of the prox-linear update scheme.

Extrapolation uses the information at an extrapolated point in place of the current point to update the next iterate [3, 35, 48, 68, 86]. In particular, rather than using the partial gradient at $x^k$ for the next update, we instead consider an extrapolated point

$\hat{x}_{i_k}^k = x_{i_k}^k + \omega_k \big( x_{i_k}^k - x_{i_k}^{k-1} \big)$   (11)

where $\omega_k \ge 0$ is an extrapolation weight. This extrapolated point is then used to compute the next update:

$x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} \langle \nabla_{i_k} f(\hat{x}^k), x_{i_k} - \hat{x}_{i_k}^k \rangle + \frac{1}{2\eta_k}\|x_{i_k} - \hat{x}_{i_k}^k\|^2 + r_{i_k}(x_{i_k}),$

where $\hat{x}^k$ denotes $x^k$ with its $i_k$th block replaced by $\hat{x}_{i_k}^k$. Note that if $\omega_k = 0$, the above update reduces to that in (9). An appropriate positive weight $\omega_k$ can significantly accelerate the convergence of BCD with the prox-linear update, as demonstrated in [47, 85], while the per-iteration complexity remains almost the same.
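In code, the change relative to a plain prox-linear step is a one-line extrapolation; the following sketch (our own, with `grad_i` and `prox_i` as assumed helper callables and a constant weight `omega` for illustration) applies it to one coordinate:

```python
def extrapolated_coordinate_step(grad_i, prox_i, x_i, x_i_prev, eta, omega):
    """One prox-linear coordinate update computed at an extrapolated point.

    grad_i: callable returning the partial gradient at a given value of the
            chosen coordinate (the other blocks are treated as fixed inside it);
    prox_i: proximal operator of eta * r_i;
    omega:  extrapolation weight (omega = 0 recovers the plain update (9)).
    """
    x_hat = x_i + omega * (x_i - x_i_prev)        # extrapolation, eq. (11)
    return prox_i(x_hat - eta * grad_i(x_hat))    # prox-linear step at x_hat
```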

2.2.5 Stochastic Gradients

Often in machine learning and other large-data applications, we encounter problems whose datasets consist of tens of millions of data points and millions of features. Due to the size and scope of these problems, even computing a single coordinate gradient may be extremely expensive. To remedy this, we can introduce stochastic gradients.

Consider the following stochastic program, called the regularized expected risk minimization problem,

$\min_{x} \; \mathbb{E}_\xi \big[ f(x; \xi) \big] + \sum_{i} r_i(x_i)$   (12)

where $\xi$ is a random variable, $f(\cdot\,; \xi)$ is differentiable, and the $r_i$'s are certain regularization terms.

The function $f(x; \xi)$ may represent some loss due to inaccurate predictions from some classifier or prediction function. To minimize the loss over any set of predicted and true parameters, we take the expectation of the loss with respect to some probability distribution modeled by the random variable $\xi$. This expectation takes the form of an integral or summation that weights the losses over all possible predictions and true parameters.

An interesting case of (12) is when $\xi$ follows a uniform distribution over $\{1, \dots, N\}$, representing a noninformative prior distribution. In this case, if the potential outcomes are discrete, then the stochastic program in (12) reduces to the empirical risk minimization problem

$\min_{x} \; \frac{1}{N} \sum_{j=1}^{N} f_j(x) + \sum_{i} r_i(x_i)$   (13)

where each $f_j$ represents the loss incurred with respect to one sample from the dataset. Since the problems we are considering often rely on millions of training points, $N$ is very large. The empirical risk minimization problem is also often used in place of the expected risk minimization problem when the probability distribution of $\xi$ is unknown.

The stochastic gradient method, also called the stochastic approximation method (e.g., see [46]), is useful for minimizing objectives of the form (12) or (13) with large $N$, for which computing the exact gradient or objective becomes overly expensive. To compute the full gradient for (13), one would have to compute $\nabla f_j$ for every training example in the dataset. Since this is often infeasible, we can instead sample one or a small batch of the loss functions $f_j$, called a mini-batch, to compute a subsampled gradient in place of the full gradient, i.e., we use

$\tilde{\nabla} f(x) = \frac{1}{|\mathcal{B}|} \sum_{j \in \mathcal{B}} \nabla f_j(x),$

where $\mathcal{B}$ is a mini-batch and $|\mathcal{B}|$ is the number of sample functions selected from the $N$ loss functions $f_j$.

More generally, for solving (12) or (13) by prox-linear BCD, we may replace the true coordinate gradient with a stochastic approximation, i.e., if the $i_k$th block is selected,

$x_{i_k}^{k+1} \in \arg\min_{x_{i_k}} \langle \tilde{g}_{i_k}^k, x_{i_k} - x_{i_k}^k \rangle + \frac{1}{2\eta_k}\|x_{i_k} - x_{i_k}^k\|^2 + r_{i_k}(x_{i_k})$   (14)

where $\tilde{g}_{i_k}^k$ is a stochastic approximation of $\nabla_{i_k} f(x^k)$, such as a subsampled gradient.

Though the stochastic prox-linear update may be less accurate, it works well when there is a limited amount of memory available, or when a solution is needed quickly, as discussed in [13, 88].
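A minimal sketch (our own illustration) of the subsampled gradient used in place of the full gradient of (13):

```python
import numpy as np

def minibatch_gradient(grads, x, batch_size, rng):
    """Average the gradients of a random mini-batch of the N loss functions.

    grads is a list of callables, grads[j](x) = gradient of f_j at x;
    the result replaces the full gradient (1/N) * sum_j grads[j](x).
    """
    batch = rng.choice(len(grads), size=batch_size, replace=False)
    return sum(grads[j](x) for j in batch) / batch_size
```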

2.2.6 Variance Reduction Techniques

Alternatively, we can also consider stochastic variance-reduced gradients $\tilde{g}^k$, which combine stale gradients with new gradients to reduce the variance of the chosen stochastic gradients. These variance-reduced stochastic gradient algorithms gain a significant speedup in convergence: whereas stochastic gradients only have sublinear convergence rate guarantees, variance-reduced stochastic gradients can achieve linear convergence rates, similar to traditional gradient descent methods, on problems with strongly convex objectives.

Consider the problem given above in (13). Let $\phi_j^k$ denote the point stored at the most recent gradient evaluation of the function $f_j$, and let $\nabla f_j(\phi_j^k)$ denote the stored gradient. We list some common stochastic variance-reduced gradients below:

  • SAG [65]: If the $j_k$th indexed function is chosen at iterate $k$,

    $\tilde{g}^k = \frac{1}{N}\big( \nabla f_{j_k}(x^k) - \nabla f_{j_k}(\phi_{j_k}^k) \big) + \frac{1}{N} \sum_{j=1}^{N} \nabla f_j(\phi_j^k).$

    The current iterate is then taken as $\phi_{j_k}^{k+1} = x^k$, and $\nabla f_{j_k}(x^k)$ is explicitly stored in a table of gradients.

  • SAGA [14]: If the $j_k$th indexed function is chosen at iterate $k$,

    $\tilde{g}^k = \nabla f_{j_k}(x^k) - \nabla f_{j_k}(\phi_{j_k}^k) + \frac{1}{N} \sum_{j=1}^{N} \nabla f_j(\phi_j^k).$

    The current iterate is then taken as $\phi_{j_k}^{k+1} = x^k$, and $\nabla f_{j_k}(x^k)$ is explicitly stored in a table of gradients.

  • SVRG [33]: If the $j_k$th indexed function is chosen at iterate $k$,

    $\tilde{g}^k = \nabla f_{j_k}(x^k) - \nabla f_{j_k}(\tilde{x}) + \frac{1}{N} \sum_{j=1}^{N} \nabla f_j(\tilde{x}),$

    where the snapshot point $\tilde{x}$ is not updated every step but after a fixed number of iterations. If enough memory is available, the individual gradients $\nabla f_j(\tilde{x})$ as well as their average are all stored; otherwise, one can store only the average and evaluate $\nabla f_{j_k}(\tilde{x})$ at each iteration (in addition to the usual work to evaluate $\nabla f_{j_k}(x^k)$).

Another approach, the stochastic dual coordinate ascent algorithm (SDCA) [67], applies randomized dual coordinate ascent to the dual formulation of the problem and gives similar variance reduction properties.

In general, though these methods give better convergence properties, they require more memory to store stale gradients or more computation to evaluate the exact gradient. However, they perform better than traditional stochastic gradient methods and work well when calculating the exact gradient is expensive.

Note that the primary difference between SVRG and SAG is that SVRG makes 2-3x more gradient evaluations if it does not store a table of gradients, whereas SAG uses fewer gradient evaluations but requires more memory overhead to store the gradients. SAGA may be interpreted as a midpoint between SVRG and SAG. The choice among SAG, SAGA, and SVRG is, therefore, problem-dependent.

The variance reduction technique can also be incorporated into prox-linear BCD. For example, one can use any of the above three estimators in the update (14) to accelerate its convergence.
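For concreteness, here is a brief sketch (our own) of the memory-saving SVRG variant that stores only the snapshot point and its full gradient; the epoch and inner-loop lengths are illustrative choices:

```python
import numpy as np

def svrg(grads, x0, step, num_epochs=10, inner_steps=100, seed=0):
    """SVRG for min (1/N) sum_j f_j(x), with x0 a NumPy array.

    Each epoch recomputes the full gradient at a snapshot point; inner
    iterations use the variance-reduced estimator.
    """
    rng = np.random.default_rng(seed)
    N, x = len(grads), x0.copy()
    for _ in range(num_epochs):
        x_snap = x.copy()
        full_grad = sum(g(x_snap) for g in grads) / N     # snapshot average
        for _ in range(inner_steps):
            j = rng.integers(N)
            # variance-reduced gradient: new grad - stale grad + stored average
            v = grads[j](x) - grads[j](x_snap) + full_grad
            x -= step * v
    return x
```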

2.2.7 Summative Proximable Functions

We apply the prox-linear update (9) when the function $r_i$ is proximable, that is, when its proximal operator can be evaluated at a low cost. Functions such as the $\ell_1$-norm, $\ell_2$-norm, and $\ell_\infty$-norm, as well as the indicator functions of box constraints, one or two linear constraints, and the standard simplex, are proximable functions. The list can be quite long. Nonetheless, it is not difficult to see that, even if two functions $r^{(1)}$ and $r^{(2)}$ are both proximable, their sum $r^{(1)} + r^{(2)}$ may not be proximable. Therefore, the update (9) can still be expensive to compute if $r_i$ is the sum of two or more proximable functions.

A summative proximable function is the sum of proximable functions $r^{(1)}$ and $r^{(2)}$ that satisfy

$\mathbf{prox}_{r^{(1)} + r^{(2)}}(y) = \mathbf{prox}_{r^{(2)}}\big( \mathbf{prox}_{r^{(1)}}(y) \big).$

Because its proximal operator can be obtained by applying the proximal operator of $r^{(1)}$ and then that of $r^{(2)}$ in a sequential fashion, the sum is also proximable. Some common examples of summative proximable functions are:

  • $r(x) = \lambda\|x\|_1 + \theta(x)$, where $\lambda > 0$ and $\theta$ is a homogeneous function of order 1 (i.e., $\theta(\alpha x) = \alpha\,\theta(x)$ for $\alpha \ge 0$). Examples of $\theta$ include $\|x\|_2$, $\|x\|_\infty$, and their nonnegative combinations.

  • $r(x) = \lambda\,\mathrm{TV}(x) + \theta(x)$, where $\mathrm{TV}$ is the (discrete) total variation and $\theta$ is a function whose proximal operator preserves the ordering of the coordinates, i.e., for any $x$ and coordinates $i$ and $j$,

    $x_i \ge x_j \;\Longrightarrow\; \big(\mathbf{prox}_{\theta}(x)\big)_i \ge \big(\mathbf{prox}_{\theta}(x)\big)_j.$

    Examples of such $\theta$ include $\|x\|_1$, $\|x\|_2^2$, $\|x\|_\infty$, or, more generally, $\|x\|_p$ for any $p \ge 1$.

  • [11, Prop. 3.6] the scalar function $r(x) = \lambda|x| + g(x)$, where $\lambda > 0$ and $g$ is convex and differentiable with $g'(0) = 0$. An example is the elastic net regularizer [96]: $r(x) = \lambda_1|x| + \lambda_2 x^2$ (a prox sketch for this case is given at the end of this subsection).

The key to these results is an inclusion property. For any $y$, let $u = \mathbf{prox}_{r^{(1)}}(y)$ and $v = \mathbf{prox}_{r^{(2)}}(u)$, whose minimization conditions are

$\mathbf{0} \in \partial r^{(1)}(u) + (u - y) \quad\text{and}\quad \mathbf{0} \in \partial r^{(2)}(v) + (v - u),$

respectively, and adding them yields

$\mathbf{0} \in \partial r^{(1)}(u) + \partial r^{(2)}(v) + (v - y).$

If the properties of $r^{(1)}$ and $r^{(2)}$ give the inclusion $\partial r^{(1)}(u) + \partial r^{(2)}(v) \subseteq \partial\big(r^{(1)} + r^{(2)}\big)(v)$, then we arrive at the minimization condition of $v = \mathbf{prox}_{r^{(1)}+r^{(2)}}(y)$:

$\mathbf{0} \in \partial\big(r^{(1)} + r^{(2)}\big)(v) + (v - y).$

Because the first two classes of summative proximable functions have not, to the best of our knowledge, appeared elsewhere, a proof is included in Appendix D.
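The elastic net case above can be checked directly: composing the two proximal operators sequentially reproduces its known closed-form prox. A short sketch (our own illustration):

```python
import numpy as np

def prox_elastic_net(y, lam1, lam2):
    """prox of r(x) = lam1*|x| + lam2*x^2, computed by applying the prox of
    lam1*|x| first and then the prox of lam2*x^2, per the summative property."""
    u = np.sign(y) * np.maximum(np.abs(y) - lam1, 0.0)  # prox of lam1*|x|
    return u / (1.0 + 2.0 * lam2)                       # prox of lam2*x^2
```

One can verify by the optimality condition $\lambda_1 \partial|x| + 2\lambda_2 x + x - y \ni 0$ that this composition equals $\mathbf{prox}_{\lambda_1|\cdot| + \lambda_2(\cdot)^2}(y)$ directly.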

2.3 Choosing Update Index

In this section, we elaborate on various approaches to choosing the coordinate or block index $i_k$. Since different paths taken by coordinate descent may lead to different minima, and different schemes perform differently on both convex and non-convex problems, the choice of update index at each iterate is crucial for good BCD performance. It is often easy to switch index orders; however, the choice of index affects convergence, possibly resulting in faster convergence or even divergence. We describe the index rules in more detail below.

2.3.1 Cyclic Variants

The most natural, deterministic approach for choosing an index is to choose indices in a cyclic fashion, i.e.,

$i_k = (k \bmod s) + 1.$

We may also adapt this method and instead cycle through a permutation of $\{1, \dots, s\}$, called a shuffled cyclic method. In practice, one may reshuffle the order of the indices after each cycle, which may have stronger convergence properties for some applications.

Another approach is to satisfy an essentially cyclic condition, in which for every $T$ consecutive iterations, each component is modified at least once. More rigorously, we require

$\{1, 2, \dots, s\} \subseteq \{i_{k+1}, i_{k+2}, \dots, i_{k+T}\}$

for all $k$.

Cyclic variants are the most intuitive and the most easily implemented. BCD with the deterministic cyclic rule may perform worse than with the shuffled cyclic rule, as demonstrated in [87] for solving non-negative matrix factorization. Convergence results for cyclic BCD are given in [4, 6, 16, 23, 28, 42, 59, 64, 80, 87, 92].

2.3.2 Randomized Variants

In randomized BCD algorithms, the update component or block is chosen randomly at each iteration. The simplest and most commonly used randomized variant is to select $i_k$ with equal probability (i.e., sample uniformly), independently of all choices made in prior iterations.

Other typical randomized variants include sampling without replacement and considering different, non-uniform probability distributions. We list common sampling methods below:

  1. Uniform sampling [20, 47, 60, 66, 67]: Each block coordinate is chosen with equal probability as we described above, i.e.,

     $P(i_k = j) = \frac{1}{s}, \quad j = 1, \dots, s.$

  2. Importance sampling [36, 47, 60, 93, 94]: We weight each block proportionally to its block-wise gradient Lipschitz constant $L_j$. More rigorously, given some $\alpha \ge 0$, Nesterov [47] proposes the following distribution:

     $P(i_k = j) = \frac{L_j^\alpha}{\sum_{i=1}^{s} L_i^\alpha}.$

     This scheme generalizes uniform sampling: when $\alpha = 0$, we obtain uniform sampling. Importance sampling has been further studied in [1, 12].

  3. Arbitrary sampling [51, 57, 58, 60, 61]: We pick and update a block following some assigned probability distribution $p = (p_1, \dots, p_s)$, i.e.,

     $P(i_k = j) = p_j,$

     where $p_j \ge 0$ for all $j$ and $\sum_{j=1}^{s} p_j = 1$. This sampling scheme generalizes both uniform and importance sampling.

Randomized BCD is well suited for cases in which memory or available data is limited, since computing a partial derivative is often much cheaper and less memory-demanding than computing an entire gradient [47]. Recent work also suggests that randomization improves the convergence rate of BCD in expectation, such as for minimizing generic smooth plus simple nonsmooth block-separable convex functions [47, 57, 60, 74]. However, randomized BCD variants have greater per-iteration complexities than cyclic variants, since they must sample from a probability distribution at each iteration. Since randomized variants are non-deterministic, results in practice may vary. Moreover, during a run of randomized BCD, cache misses are more likely, requiring extra time to move data from slower to faster memory in the memory hierarchy.
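All three sampling rules reduce to drawing an index from a probability vector; a small sketch (our own illustration) of importance sampling with the $L^\alpha$ weights above:

```python
import numpy as np

def sample_index(L, alpha, rng):
    """Importance sampling: P(i) proportional to L_i ** alpha.

    alpha = 0 recovers uniform sampling; passing any normalized probability
    vector in place of p would give arbitrary sampling.
    """
    p = L ** alpha
    p = p / p.sum()
    return rng.choice(len(L), p=p)
```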

2.3.3 Greedy Variants

The last widely used approach for selecting indices is the greedy approach. These methods choose the index "greedily", i.e., the index such that the objective function is decreased most, or the gradient or subgradient has the largest size, in that direction. The simplest variant for smooth functions, called the Gauss-Southwell selection rule (GS) (the greedy selection rule dates back to Gauss and was popularized by Southwell [72] for linear systems), involves choosing $i_k$ such that the gradient of the chosen block is largest, or mathematically,

$i_k \in \arg\max_{1 \le i \le s} \|\nabla_i f(x^k)\|$   (15)

for formulation (1). If each block consists of an individual component, then the norm reduces to the absolute value. GS can be analyzed in the general CD framework [77, 42] for convex optimization. The refined analysis in [49] shows that, except in extreme cases, GS converges faster than randomly chosen coordinates in terms of the number of iterations, though its per-iteration complexity is higher.

Alternatively, we could choose the component or block that gives the maximum improvement, called the Maximum Block Improvement (MBI) rule [9, 38]:

$i_k \in \arg\min_{1 \le i \le s} \; \min_{x_i} F(x_i, x_{\neq i}^k)$   (16)

Motivated by the performance of Lipschitz sampling and the Gauss-Southwell rule, Nutini et al. [49] proposed the Gauss-Southwell-Lipschitz (GSL) rule:

$i_k \in \arg\max_{1 \le i \le s} \frac{\|\nabla_i f(x^k)\|}{\sqrt{L_i}}$   (17)

The GSL rule accounts for varying coordinate-wise Lipschitz constants. When the gradients of two coordinates have similar sizes, updating the coordinate that has the smaller $L_i$ will likely lead to a greater reduction in the objective value and is thus preferred in the selection.
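In code, the GS and GSL rules for single coordinates are each one argmax over greedy scores; a brief sketch (our own illustration):

```python
import numpy as np

def gauss_southwell(grad):
    """GS rule (15): pick the coordinate with the largest gradient magnitude."""
    return int(np.argmax(np.abs(grad)))

def gauss_southwell_lipschitz(grad, L):
    """GSL rule (17): scale each score by 1/sqrt(L_i) before the argmax."""
    return int(np.argmax(np.abs(grad) / np.sqrt(L)))
```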

For non-smooth problems, we list some commonly used GS rules for formulation (6). We let $L$ denote the gradient Lipschitz constant and $L_i$ denote the $i$th coordinate-wise gradient Lipschitz constant.

  1. Gauss-Southwell-s rule (GS-s): At each iteration, the coordinate is chosen by

     $i_k \in \arg\max_{1 \le i \le s} \; \min_{s_i \in \partial r_i(x_i^k)} \big\| \nabla_i f(x^k) + s_i \big\|$   (18)

     This rule chooses the coordinate with the greatest negative partial derivative, similar to (15). It is popular in $\ell_1$ minimization [37, 70, 84].

  2. Gauss-Southwell-r rule (GS-r): At each iteration, the coordinate is chosen by

     $i_k \in \arg\max_{1 \le i \le s} \Big\| x_i^k - \mathbf{prox}_{\frac{1}{L} r_i}\!\big( x_i^k - \tfrac{1}{L} \nabla_i f(x^k) \big) \Big\|$   (19)

     which is equivalent to

     $i_k \in \arg\max_{1 \le i \le s} \|d_i^k\|,$

     where $d_i^k = \mathbf{prox}_{\frac{1}{L} r_i}\!\big( x_i^k - \tfrac{1}{L} \nabla_i f(x^k) \big) - x_i^k$ plays the same role as the block gradient direction. This rule chooses the block that maximizes the distance from the current iterate to the block's next iterate under a proximal gradient update. It has been used in [80, 17, 52]. If the coordinate-wise gradient Lipschitz constant $L_i$ is known, then we can replace $1/L$ by $1/L_i$ in the GS-r update to obtain the Gauss-Southwell-Lipschitz-r rule (GSL-r).

  3. Gauss-Southwell-q rule (GS-q): At each iteration, the coordinate is chosen by

     $i_k \in \arg\min_{1 \le i \le s} \; \min_{d} \Big\{ \langle \nabla_i f(x^k), d \rangle + \frac{L}{2}\|d\|^2 + r_i(x_i^k + d) - r_i(x_i^k) \Big\}$   (20)

     This rule can be interpreted as the maximum coordinate-wise descent and has been used in [80]. If the coordinate-wise gradient Lipschitz constant $L_i$ is known, then we can replace $L$ by $L_i$ in the GS-q update to obtain the Gauss-Southwell-Lipschitz-q rule (GSL-q).

Note that these greedy methods usually require evaluating the whole gradient vector, or some other greedy score, and searching for the best index. These greedy scores may be stored and updated using a max-heap, as detailed further in [44]. To implement greedy methods practically, some terms of the greedy scores may be cached and maintained at each iteration to make these approaches computationally worthwhile. Section 3 provides detailed further explanations.

Greedy coordinate selection is very efficient for sparse optimization since most zero components in the solution are never selected and thus remain zero throughout the iterations [37, 52]. This effectively reduces the problem dimension to the number of variables that are ever updated, which is relatively small. Consequently, greedy CD converges in very few iterations, and the saved iterations outweigh the extra cost of ranking the coordinates.

Other simple greedy methods involve perturbing each coordinate or block $x_i$ by some small step size $\epsilon$ and then evaluating the difference between the objective values at those perturbed points and at the current point to determine the coordinate or block of steepest descent, i.e.,

$i_k \in \arg\max_{1 \le i \le s} \; \frac{F(x^k) - F(x^k + \epsilon\, e_i)}{\epsilon},$

where $e_i$ is the standard basis vector consisting of 1 at the $i$th entry and 0's elsewhere.

2.3.4 Comparison of Index Rules

We summarize the strengths and weaknesses of these common index rules below in Table 1. We also point readers to Section 4 for a comparison of various index rules applied to several examples. Note that no matter which index rule is chosen, CD can stagnate at non-critical points for objectives with terms that are both non-separable and non-smooth.

Cyclic
  Description: $i_k$ is chosen cyclically, $i_k = (k \bmod s) + 1$, or by a randomly shuffled cycle.
  Strengths:
  • Fast, easy implementation
  • Lowest per-iteration complexity
  • Performs well for problems with low coupling between blocks
  Weaknesses:
  • When coordinates are highly coupled, performance may be worse than randomized and greedy methods, and the theoretical worst-case complexity is worse
  • When exact coordinate minimization is used on non-convex problems, the iterates may cycle and fail to converge

Randomized
  Description: $i_k$ is chosen randomly following some probability distribution given by the vector $p$: $P(i_k = j) = p_j$.
  Strengths:
  • Easier to analyze; often reduces to the standard case after taking expectations
  • Empirically avoids local solutions more often for some non-convex problems
  • Well suited for parallel computing
  Weaknesses:
  • Non-deterministic
  • Random data moves are slower and result in cache misses
  • Slightly higher per-iteration complexity than cyclic CD due to pseudo-random number generation
  • 2-3x slower if the coordinates are weakly coupled

Greedy
  Description: $i_k$ is chosen by a greedy rule, such as greatest descent: $i_k = \arg\max_{1 \le i \le s} \big\| [\nabla f(x^{k-1})]_i \big\|$.
  Strengths:
  • Convergence takes the fewest iterations, both theoretically and empirically
  • Well suited for problems with sparse solutions; can be parallelized
  Weaknesses:
  • Needs computation of greedy scores or rankings, such as gradients or difference vectors
  • Highest per-iteration complexity, except when greedy scores can be updated at low cost

Table 1: Summary and comparison of different index choosing schemes for CD.

3 Coordinate Friendly Structures

As discussed earlier in the monograph, the strong performance and parallelizability of CD rely on solving subproblems that involve fewer variables and have low complexity and low memory requirements. Therefore, not all problems are amenable to CD, particularly if little computation is saved by a coordinate update relative to a full update of all coordinates. Intuitively, given $s$ blocks, a block coordinate update should cost about $1/s$ of the computation of a full update. Thus, identifying coordinate friendly updates for a given problem is crucial to implementing CD effectively.

In this section, we elaborate on different types of structures in optimization problems that make CD computationally worthy. We define the notion of a coordinate friendly (CF) update and coordinate friendly (CF) structure and introduce heuristics to exploit problems with CF structures. This basic theory may help practitioners identify when CD is applicable and computationally worthwhile for their problem, as well as determine which quantities to cache and maintain to improve the performance of CD.

The CF structure was originally presented in [53] for monotone set-valued operators, which apply in more general settings. We refrain from discussing this here and instead replace the notion of an operator with an update mapping.

3.1 Coordinate Friendly Update Mappings

Before we define coordinate friendly updates and structures, we introduce terminology and notation for updates. Suppose we are working with $s$ equally-sized blocks in $\mathbb{R}^n$, i.e., $x = (x_1, \dots, x_s)$ with $x_i \in \mathbb{R}^{n/s}$. Let $\mathcal{T}: \mathbb{R}^n \to \mathbb{R}^n$ represent an update mapping, or simply update. Applying it to $x^k$, we obtain the next iterate $x^{k+1}$, i.e.,

$x^{k+1} = \mathcal{T}(x^k).$

In addition, let $\mathcal{T}_i$ denote the coordinate update mapping of $\mathcal{T}$ for block $i$, i.e.,

$x_i^{k+1} = \mathcal{T}_i(x^k) := \big( \mathcal{T}(x^k) \big)_i.$

As an example, consider the least squares problem:

$\min_{x} f(x) := \frac{1}{2}\|Ax - b\|^2$   (21)

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. The update mapping for gradient descent on (21) is given as:

$\mathcal{T}(x) = x - \eta\, A^\top (Ax - b),$

where $\eta$ is the step size, and the coordinate update mapping is then given as

$\mathcal{T}_i(x) = x_i - \eta \big( A^\top (Ax - b) \big)_i.$

We let $\mathcal{M}[a \to b]$ denote the number of basic operations necessary to compute the quantity $b$ from the input $a$. Then the update mapping $\mathcal{T}$ is called coordinate friendly (CF) if

$\mathcal{M}\big[ x \to \mathcal{T}_i(x) \big] = O\!\left( \frac{1}{s}\, \mathcal{M}\big[ x \to \mathcal{T}(x) \big] \right), \quad \text{for all } i$   (22)

In other words, the number of basic operations necessary to compute the coordinate update is about $1/s$ times the number of basic operations to compute the full update.

Returning to our least squares example, $\mathcal{T}$ is CF since the coordinate gradient update can be computed by

$\mathcal{T}_i(x) = x_i - \eta \big( (A^\top A)\, x - A^\top b \big)_i$   (23)

which takes $O(n)$ operations after precomputing $A^\top A$ and $A^\top b$. In contrast, the full gradient update takes $O(n^2)$ operations with precomputed $A^\top A$ and $A^\top b$.

Intuitively, the definition of a coordinate friendly update mapping captures the notion of gaining an $s$-fold speed-up per coordinate update relative to the full update $\mathcal{T}$. Equivalently, if we break our variable into $s$ equally-sized blocks, updating each block should require about $1/s$ of the operations necessary to compute the full update.

This formulation also matches our intuition in the extreme cases: when each block is an individual component ($s = n$), we gain an $n$-fold speed-up for each coordinate update relative to the full update; when there is only one block containing the entire vector $x$ ($s = 1$), we gain no speed-up.
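To make the operation counts concrete, this sketch (our own illustration) performs the $O(n)$ coordinate update of (23) with precomputed $A^\top A$ and $A^\top b$, whereas a full gradient step with the same precomputed matrices would cost $O(n^2)$:

```python
import numpy as np

def cf_coordinate_step(AtA, Atb, x, i, eta):
    """O(n) coordinate gradient step for least squares, given the
    precomputed AtA = A^T A and Atb = A^T b."""
    grad_i = AtA[i] @ x - Atb[i]   # one row of A^T A times x: O(n)
    x[i] -= eta * grad_i
    return x
```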

We say that the problem is coordinate friendly or has coordinate friendly (CF) structure if there exists a coordinate friendly update for the problem. To readily apply CD, we must recognize and exploit CF structures in optimization problems.

Identifying CF structure in optimization problems is not always trivial. To help with the analysis of optimization problems to find CF structure, we will give some useful heuristics applied to CD implementations in Section 3.2.

The definitions for CF update and structure may be further generalized for blocks of arbitrary lengths, but we refrain from doing so to maintain simplicity.

3.2 Common Heuristics for Exploiting CF Structures

In this section, we introduce some common heuristics that exploit structure in a problem to improve the performance of CD and gain coordinate friendliness.

3.2.1 Precomputation of Non-Variable Quantities

As noted already in the least squares example, one common approach for increasing the computational worthiness of CD is to precompute certain quantities in the update. If certain quantities appearing in the update do not involve any of the variables, we can precompute them before applying CD to save computation.

For example, consider again the least squares problem given above. Since the full gradient is given by $\nabla f(x) = A^\top A\, x - A^\top b$, we can precompute $A^\top A$ and $A^\top b$ to avoid recomputing $A^\top(Ax - b)$ at each iteration.

Note, however, that this is only efficient when $n = O(m)$ (recall that $A$ has $m$ rows and $n$ columns): multiplying by the precomputed $A^\top A$ costs $O(n^2)$ operations, while computing $A^\top(Ax)$ costs $O(mn)$. If $n \gg m$, then multiplying by $A$ and then by $A^\top$ is computationally cheaper, so we avoid precomputation in this case.

3.2.2 Caching and Maintaining Variable-Dependent Quantities

Another approach to saving computation is to cache and maintain variable-dependent quantities, that is, to store some quantity in memory and update it at each iteration.

For example, for the least squares problem, instead of performing the coordinate update as in (23), we can cache the residual

$r^k := A x^k - b.$

Then for any $i$, the coordinate gradient $\nabla_i f(x^k) = A_{:,i}^\top r^k$ can be evaluated in $O(m)$ operations, and after $x_i$ is updated, the cached residual can be maintained in another $O(m)$ operations via $r^{k+1} = r^k + A_{:,i}\,(x_i^{k+1} - x_i^k)$.
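A sketch of this caching scheme (our own illustration): one coordinate gradient step that reads and maintains the residual $r = Ax - b$ in $O(m)$ operations, instead of recomputing $Ax - b$ from scratch in $O(mn)$:

```python
import numpy as np

def cached_coordinate_step(A, b, x, r, i, eta):
    """One coordinate gradient step maintaining the cached residual r = A x - b.

    The residual is computed once as r = A @ x - b before the first step and
    then carried along; both lines below cost O(m).
    """
    grad_i = A[:, i] @ r        # coordinate gradient from the cached residual
    delta = -eta * grad_i
    x[i] += delta
    r += A[:, i] * delta        # residual maintenance
    return x, r
```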