1 Introduction
We consider the unconstrained convex minimization problem
(1) $\min_{x \in \mathcal{V}} f(x),$
where $f: \mathcal{V} \to \mathbb{R}$ is a smooth and convex function whose derivative is Lipschitz continuous with constant $L$, and $\mathcal{V}$ is a Hilbert space. In practice $\mathcal{V} = \mathbb{R}^N$, but it might be assigned an inner product other than the standard inner product of $\mathbb{R}^N$. Solving the minimization problem (1) is a central task with wide applications in scientific computing, machine learning, data science, and related fields.
Due to the explosion of data and the stochasticity of the real world, randomness is introduced to make algorithms more robust and computationally affordable. In the following, we restrict ourselves to randomized algorithms related to the coordinate descent (CD) method [18, 3, 14] and its block variant, the block CD (BCD) method [18, 3, 2, 27, 1], and propose a new algorithm generalizing the randomized CD (RCD) and randomized BCD (RBCD) methods.
In [16], Nesterov studied an RBCD method for huge-scale optimization problems. Assuming the gradient of the objective function is coordinate-wise Lipschitz continuous with constants $L_i$, at each step a block of coordinates $i$ is chosen randomly with probability $p_i \propto L_i^{\alpha}$ and an optimal block coordinate step with step size $1/L_i$ is employed. Note that, when $\alpha = 0$, the probability is uniformly distributed; when $\alpha = 1$, it is proportional to $L_i$. It is shown that such an RBCD method converges linearly for a strongly convex function and sublinearly in the convex case. Later, in [1], the cyclic version of the BCD method was studied; namely, each iteration consists of performing a gradient projection step with respect to a certain block taken in cyclic order. A global sublinear convergence rate was established for convex problems and, when the objective function is strongly convex, a linear convergence rate can be proved. More recently, Wright [28] studied a simple version of RCD that updates one coordinate at a time with a uniformly chosen index. It was pointed out that, when applied to a linear system via the least-squares formulation, such an RCD is exactly a randomized Kaczmarz method [22, 11]. Similarly, it was shown that RCD converges sublinearly for convex functions and linearly for strongly convex functions. In [13], Lu developed a randomized block proximal damped Newton (RBPDN) method. For solving smooth convex minimization problems, RBPDN uses Newton's method in each block. Compared with Newton's method, the computational complexity of RBPDN is reduced since the Newton step is performed locally on each block. There is a trade-off between convergence rate and computational complexity. If the dimension of the blocks is too small, the Hessian on each block might lose a lot of information, which might lead to slow convergence; if the block dimension is large, then computing the Hessian inverse on each block might still be expensive.

Those existing RCD and RBCD methods can achieve acceleration compared with standard gradient descent (GD) methods, especially for large-scale problems. However, the convergence of RCD and RBCD becomes quite slow when the problem is ill-conditioned. It is well known that preconditioning techniques can be used to improve the conditioning of an optimization problem; see [19, 18]. While preconditioning techniques can be motivated in different ways, one approach is to look at problem (1) in $\mathbb{R}^N$ endowed with an inner product induced by the preconditioner. Roughly speaking, assuming the preconditioner $A$ is symmetric and positive definite, we consider the Lipschitz continuity and convexity using the $(\cdot,\cdot)_A$ inner product and $\|\cdot\|_A$ norm. A good preconditioner means that the condition number measured in the $A$-norm is relatively small and, therefore, the convergence of preconditioned GD (PGD) can be accelerated. The price to pay is the cost of the action of $A^{-1}$, which might be prohibitive for large-size problems. Moreover, it is also difficult to use the preconditioner in the RCD and RBCD methods because the coordinate-wise decomposition is essentially based on the $l^2$ norm.
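For concreteness, one standard way to write the PGD iteration with an SPD preconditioner $A$ (the particular step-size choice here is only for illustration) is
$$x^{k+1} = x^k - \frac{1}{L_A}\, A^{-1} \nabla f(x^k),$$
where $L_A$ denotes a Lipschitz constant of $\nabla f$ measured in the $A$-norm; each iteration thus requires one application of $A^{-1}$.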
One main idea of the proposed algorithm is to generalize the coordinate-wise decomposition to a subspace decomposition which is more suitable for the $A$-norm. This idea itself is not new. For example, the well-known multigrid method [23], which is one of the most efficient methods for solving elliptic problems, can be derived from subspace correction methods based on a multilevel space decomposition [29]. Its randomized version has been considered in [10]. Recently, in [5], we developed fast subspace descent (FASD) methods by borrowing the subspace decomposition idea of multigrid methods for solving optimization problems. In this paper, we provide a randomized version of FASD, abbreviated as RFASD.
A key feature of FASD and RFASD is a subspace decomposition $\mathcal{V} = \sum_{i=1}^{J} \mathcal{V}_i$. Note that we do not require the space decomposition to be a direct sum nor to be orthogonal. Indeed, the overlapping/redundancy between the subspaces is crucial to speed up the convergence if the space decomposition is stable in the $A$-norm in the following sense:

(SD) Stable decomposition: there exists a constant $C_A > 0$ such that every $v \in \mathcal{V}$ admits a decomposition $v = \sum_{i=1}^{J} v_i$ with $v_i \in \mathcal{V}_i$ and
(2) $\sum_{i=1}^{J} \|v_i\|_{A}^{2} \leq C_A \|v\|_A^2.$
With such a subspace decomposition, the proposed RFASD method is similar to the RBCD method. At each iteration, RFASD randomly chooses a subspace according to a certain sampling distribution, computes a search direction in that subspace, and then updates the iterate with an appropriate step size.
Based on standard assumptions, we first prove a sufficient decay inequality. Coupled with a standard upper bound of the optimality gap, we then show that RFASD converges sublinearly for convex functions and linearly if the objective function is strongly convex. More importantly, the convergence rate of RFASD depends on the condition number measured in the $A$-norm, and there is no need to invert $A$ directly; only local solves on each subspace are required. Using the strongly convex case as an example, we show that the convergence rate is governed by the condition number of $f$ measured in the $A$-norm, which could be much smaller than the condition number measured in the $l^2$ norm, by the number of subspaces $J$, and by the constant $C_A$ that measures the stability of the space decomposition in the $A$-norm; see (2). Therefore, if we choose a proper preconditioner such that the $A$-norm condition number is $\mathcal{O}(1)$ and a stable subspace decomposition such that $C_A = \mathcal{O}(1)$, then the expected optimality gap can be reduced below any prescribed tolerance after a sufficient number of iterations. This indicates an exponential decay rate that is independent of the size of the optimization problem, which shows the potential of the proposed RFASD method for solving large-scale optimization problems. In summary, based on a stable subspace decomposition, we can achieve the preconditioning effect by only solving small problems on each subspace, which reduces the computational complexity.
The paper is organized as follows. In Section 2, we set up the minimization problem with proper assumptions on the objective function and the space decomposition, and then propose the RFASD algorithm. In Section 3, based on the stable decomposition assumption, convergence analyses for convex and strongly convex functions are derived. In Section 4, we give some examples and comparisons between several methods within the framework of RFASD. Numerical results are provided in Section 5 to confirm the theory. Finally, we draw some conclusions in Section 6.
2 Fast Subspace Descent Methods
In this section, we introduce the basic setting of the optimization problem we consider, as well as basic definitions, notation, and assumptions. Then we propose the fast subspace descent method based on a proper subspace decomposition.
2.1 Problem Setting
We consider the minimization problem (1). The Hilbert space $\mathcal{V}$ is a vector space equipped with an inner product, and the induced norm is denoted accordingly. Although our discussion might be valid in general Hilbert spaces, we restrict ourselves to finite-dimensional spaces and, without loss of generality, take $\mathcal{V} = \mathbb{R}^N$. In this case, the standard dot product in $\mathbb{R}^N$ is denoted by
(3) $(x, y) := x^{\top} y = \sum_{i=1}^{N} x_i y_i.$
The inner product $(\cdot,\cdot)_A$ is induced by a given symmetric positive definite (SPD) matrix $A$ and defined as follows:
(4) $(x, y)_A := (Ax, y) = y^{\top} A x,$
with the induced norm $\|x\|_A := (x, x)_A^{1/2}$.
Let $\mathcal{V}'$ be the linear space of all linear and continuous mappings $\mathcal{V} \to \mathbb{R}$, which is called the dual space of $\mathcal{V}$. The dual norm with respect to the $A$-norm is defined as: for $\ell \in \mathcal{V}'$,
(5) $\|\ell\|_{A^{-1}} := \sup_{x \in \mathcal{V},\, x \neq 0} \frac{\langle \ell, x \rangle}{\|x\|_A},$
where $\langle \cdot, \cdot \rangle$ denotes the standard duality pair between $\mathcal{V}'$ and $\mathcal{V}$. By the Riesz representation theorem, $\ell$ can also be treated as a vector, and it is straightforward to verify that $\|\ell\|_{A^{-1}}^{2} = \ell^{\top} A^{-1} \ell$.
Next, we introduce a decomposition of the space $\mathcal{V}$, i.e.,
(6) $\mathcal{V} = \sum_{i=1}^{J} \mathcal{V}_i, \qquad \mathcal{V}_i \subseteq \mathcal{V}, \quad i = 1, \ldots, J.$
Again, we emphasize that the space decomposition (6) is not necessarily a direct sum nor orthogonal. For each subspace $\mathcal{V}_i$, we assign an inner product $(\cdot,\cdot)_{A_i}$ induced by a symmetric positive definite matrix $A_i$. The product space $\mathcal{V}_1 \times \mathcal{V}_2 \times \cdots \times \mathcal{V}_J$ is assigned the product topology: for $(v_1, \ldots, v_J)$ in the product space, the squared norm is $\sum_{i=1}^{J} \|v_i\|_{A_i}^2$. In matrix form, the associated matrix is the block diagonal matrix $\operatorname{diag}(A_1, \ldots, A_J)$ defined on the product space.
We shall make the following assumptions on the objective function:

(LCi) The gradient of $f$ is Lipschitz continuous restricted to each subspace $\mathcal{V}_i$, with Lipschitz constant $L_i$ measured in the corresponding subspace norm $\|\cdot\|_{A_i}$.

(SC) $f$ is strongly convex with strong convexity constant $\mu_A \geq 0$ in the $A$-norm, i.e., for all $x, y \in \mathcal{V}$,
$f(y) \geq f(x) + \langle \nabla f(x), y - x \rangle + \frac{\mu_A}{2} \|y - x\|_A^2.$
Let $I_i : \mathcal{V}_i \to \mathcal{V}$ be the natural inclusion and let $I_i^{\top}$ denote its transpose. In the terminology of multigrid methods, $I_i$ corresponds to the prolongation operator and $I_i^{\top}$ is the restriction.
2.2 Randomized Fast Subspace Descent Methods
Now, we propose the randomized fast subspace descent (RFASD) algorithm.
(7) 
The nonuniform sampling distribution and the step size require a priori knowledge of the local Lipschitz constants $L_i$. A conservative plan is to use one upper bound for all subspaces. For example, when the gradient of $f$ is Lipschitz continuous with a global Lipschitz constant $L$, we can set $L_i = L$ for all $i$ and, consequently, pick the subspace uniformly and use a uniform step size $1/L$.
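To make the iteration concrete, the following is a minimal sketch of one possible RFASD-style loop in which the subspaces are represented by prolongation matrices `U[i]`, the subspace inner products by SPD matrices `A_sub[i]`, and the step sizes and sampling probabilities are based on (assumed known) local Lipschitz constants `L[i]`; it is an illustration of the idea rather than the reference implementation.

```python
import numpy as np

def rfasd(grad_f, U, A_sub, L, x0, n_iter=1000, seed=0):
    """Sketch of a randomized fast subspace descent loop.

    grad_f : callable returning the gradient of f at x (length-N array)
    U      : list of N-by-n_i prolongation matrices spanning the subspaces
    A_sub  : list of n_i-by-n_i SPD matrices defining the subspace inner products
    L      : array of local Lipschitz constants L_i (assumed known or estimated)
    """
    rng = np.random.default_rng(seed)
    L = np.asarray(L, dtype=float)
    prob = L / L.sum()                        # sample subspace i proportionally to L_i
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        i = rng.choice(len(U), p=prob)        # pick a subspace at random
        r_i = U[i].T @ grad_f(x)              # restrict the gradient to subspace i
        s_i = np.linalg.solve(A_sub[i], r_i)  # local solve: search direction in V_i
        x -= (1.0 / L[i]) * (U[i] @ s_i)      # prolongate and take the damped step
    return x
```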
There is a balance between the number of subspaces and the complexity of the subspace solvers. For example, we can choose $J = 1$ and thus $\mathcal{V}_1 = \mathcal{V}$. But then we need to compute $A^{-1}\nabla f$, which may be too costly for large-size problems (e.g., using standard Gaussian elimination to compute $A^{-1}$ leads to $\mathcal{O}(N^3)$ operations). At the other extreme, we can choose a multilevel decomposition with $\mathcal{O}(N)$ subspaces of dimension $\mathcal{O}(1)$; then the cost of each local solve is $\mathcal{O}(1)$.
One important question is: what is a good choice of the $A$-norm? Any good preconditioner for the objective function is a candidate. For example, when the Hessian $\nabla^2 f$ exists, $A = \nabla^2 f$ or an approximation of it is a good choice, since this inherits advantages of Newton's method or quasi-Newton methods.
Another important question is how to obtain a stable decomposition based on a given SPD matrix $A$. There is no satisfactory and universal answer to this question. One can always start from a block coordinate decomposition, as sketched below. When $A$ is (an approximation of) the Hessian, this leads to the block Newton method considered in [13]. We can then merge small blocks to form larger ones in a multilevel fashion, and algebraic multigrid methods [23] can be used in this process to provide a coarsening of the graph defined by the Hessian. In general, efficient and effective space decompositions will be problem dependent. We shall provide an example later on.
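As a concrete starting point for such a construction, the following minimal sketch builds the prolongation matrices of a non-overlapping block coordinate decomposition (block sizes are chosen here only for illustration); merging blocks in a multilevel fashion would amount to combining columns of these matrices.

```python
import numpy as np

def block_coordinate_prolongations(N, block_size):
    """Prolongation matrices U_i for a non-overlapping block coordinate decomposition.

    Each U_i is an N-by-n_i matrix whose columns are unit vectors of one block of
    coordinates, so that V = V_1 + ... + V_J with V_i = range(U_i).
    """
    U = []
    for start in range(0, N, block_size):
        idx = np.arange(start, min(start + block_size, N))
        U_i = np.zeros((N, len(idx)))
        U_i[idx, np.arange(len(idx))] = 1.0
        U.append(U_i)
    return U

# Example: split N = 10 coordinates into blocks of size 3 (the last block has size 1).
U = block_coordinate_prolongations(10, 3)
```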
2.3 Randomized Full Approximation Storage Scheme
The full approximation storage (FAS) scheme, in the deterministic setting, was proposed in [4] and is a multigrid method for solving nonlinear equations. Several FAS-like algorithms for solving optimization problems have been considered in the literature [6, 12, 15, 7], including line-search-based recursive and trust-region-based recursive algorithms.
Based on RFASD, we shall propose a randomized FAS (RFAS) algorithm. We first recall FAS in the optimization setting as discussed in [5]. Given a space decomposition, for each subspace $\mathcal{V}_i$ we are given a projection operator which, ideally, should provide a good approximation of the current iterate in $\mathcal{V}_i$, together with a local objective function. The local objective can be the original $f$, in which case the scheme coincides with the multilevel optimization methods established by Tai and Xu [24]; see Remark 4.2 in [5]. Given the current approximation $x^k$, in FAS, the search direction in each subspace is computed by the following steps:
Next we show that, if we choose the local objective functions in a certain way, RFAS becomes a special case of RFASD. Given the SPD matrices $A_i$, which induce the inner products on the subspaces $\mathcal{V}_i$, $i = 1, \ldots, J$, we define the following quadratic local objective functions:
From Algorithm 2, it is easy to see that the resulting search direction coincides with that of Algorithm 1. Therefore, in this case, RFAS agrees with RFASD. Note that in this setting $A_i$ may not be the Galerkin projection of $A$ onto $\mathcal{V}_i$, which is different from RPSD. Nevertheless, the convergence analysis (Theorems 1 and 2) can still be applied, but the constants depend on the choices of $A_i$.
Consider a slightly more general case in which the local objective function is smooth; then, by the mean value theorem, from Algorithm 2 we can write the equation for the search direction as
which implies that, again, RFAS is a special case of RFASD. Of course, this choice is impractical because we do not know the exact Hessian in general. One practical choice might be $A_i = I_i^{\top} \nabla^2 f(x^k) I_i$, which is the Galerkin projection of the Hessian matrix of the original $f$. In this case, RFAS becomes a block Newton method. The constants in the convergence analysis change accordingly, since $A_i$ then depends on $x^k$ and is different at each iteration.
3 Convergence Analysis
In this section, we shall present a convergence analysis of RFASD. We first discuss the stability of a space decomposition and then prove the crucial sufficient decay property. Then we obtain a linear or sublinear convergence rate by different upper bounds of the optimality gap.
3.1 Stable decomposition
We first introduce the mapping $\Pi$ from the product space $\mathcal{V}_1 \times \cdots \times \mathcal{V}_J$ to $\mathcal{V}$ as follows:
$\Pi (v_1, \ldots, v_J) := \sum_{i=1}^{J} v_i.$
We can write $\Pi$ and its adjoint in terms of the prolongation and restriction operators. The space decomposition (6) implies that $\Pi$ is surjective. Since all norms on finite-dimensional spaces are equivalent, $\Pi$ is a linear and continuous operator and, thus, by the open mapping theorem, there exists a continuous right inverse of $\Pi$. Namely, there exists a constant $C_A > 0$ such that, for any $v \in \mathcal{V}$, there exists a decomposition $v = \sum_{i=1}^{J} v_i$ with $v_i \in \mathcal{V}_i$ for $i = 1, \ldots, J$, and
(8) $\sum_{i=1}^{J} \|v_i\|_{A_i}^2 \leq C_A \|v\|_A^2.$
The constant $C_A$ measures the stability of the space decomposition. When the decomposition is orthogonal, $C_A = 1$. As the adjoint, the operator $\Pi^{\top}$ is injective and bounded below. The following result essentially follows from the fact that $\Pi$ and $\Pi^{\top}$ have the same minimum singular value.

Lemma 1
Proof
The first identity is an easy consequence of the definitions. We now prove (10). For any $v \in \mathcal{V}$, we choose a stable decomposition $v = \sum_{i=1}^{J} v_i$ satisfying (8). Then
Thus,
which completes the proof.
3.2 Sufficient decay
We shall prove a sufficient decay property for the function value. Note that we do not assume $f$ is convex, only that its gradient is Lipschitz continuous in each subspace.
Lemma 2
Suppose the objective function and the space decomposition satisfy (LCi). Let $\{x^k\}$ be the sequence generated by Algorithm 1. Then for all $k \geq 0$, we have
(11) 
Proof
By the Lipschitz continuity (LCi) and the choice of the step size, we have
Taking the expectation conditioned on $x^k$, with each subspace chosen with its prescribed probability, we obtain (11).
As we will show in the next two subsections, based on the above sufficient decay property (11), together with a proper upper bound of the optimality gap, a linear or sublinear convergence rate can be obtained for the strongly convex and convex cases, respectively.
3.3 Linear convergence for strongly convex functions
To derive the linear convergence for the strongly convex case, we shall use the following upper bound of the optimality gap.
Lemma 3 (Theorem 2.1.10 in [17])
Suppose that $f$ satisfies assumption (SC) with constant $\mu_A > 0$ and $x^*$ is the minimizer of $f$; then for all $x \in \mathcal{V}$,
(12) $f(x) - f(x^*) \leq \frac{1}{2\mu_A} \|\nabla f(x)\|_{A^{-1}}^2.$
Now we are ready to show the linear convergence of RFASD when the objective function is strongly convex; the result is summarized in the following theorem.
Theorem 1
Suppose the objective function and the space decomposition satisfy (LCi) and (SC) with $\mu_A > 0$. Let $\{x^k\}$ be the sequence generated by Algorithm 1. Then for all $k \geq 0$, we have the linear contraction
(13) 
3.4 Sublinear convergence for convex functions
Next, we give the convergence result for convex but not strongly convex objective functions, i.e., $\mu_A = 0$ in (SC), based on the following bounded level set assumption.

(BL) Bounded level set: $f$ is convex and attains its minimum value $f^*$ on a set $\mathcal{S}$. There is a finite constant $R_0$ such that the level set of $f$ defined by the initial guess $x^0$ is bounded, that is,
(14) $\max_{x^* \in \mathcal{S}} \; \max_{x} \left\{ \|x - x^*\|_A : f(x) \leq f(x^0) \right\} \leq R_0.$
Lemma 4
Suppose the objective function satisfies (LC) and (BL). Then for all $k \geq 0$ and any $x^* \in \mathcal{S}$,
(15) 
Proof
By convexity and (BL), for any $x^* \in \mathcal{S}$ and $k \geq 0$,
We still use the same step size and show that RFASD converges sublinearly for a convex objective function $f$.
Theorem 2
Suppose the objective function and the space decomposition satisfy (LCi), (BL), and (SC) with $\mu_A = 0$. Let $\{x^k\}$ be the sequence generated by Algorithm 1. Then for all $k \geq 1$, we have
(16) 
where and
Proof
Remark
The parameters of the algorithm can change dynamically, i.e., as functions of the iteration index $k$; for example, an iteration-dependent constant that is smaller than the fixed one can be used. The space decomposition and the local Lipschitz constants could also be improved during the iterations. In these cases, the last inequality (17) holds with the second term on the right-hand side replaced by the corresponding iteration-dependent quantity.
3.5 Complexity
Based on the convergence results in Theorems 1 and 2, we can estimate the computational complexity of the proposed RFASD method and compare it with the GD, PGD, RCD, and RBCD methods. As usual, for a prescribed error $\epsilon$, we first estimate how many iterations are needed to reach the tolerance and then estimate the overall computational complexity based on the cost of each iteration.

For gradient-based methods, the main cost per iteration is the evaluation of the gradient $\nabla f$. In general, it may take $\mathcal{O}(N^2)$ operations. In certain cases, the cost can be reduced. For example, when the problem is sparse, e.g., computing one coordinate component of $\nabla f$ only needs $\mathcal{O}(1)$ operations, computing $\nabla f$ takes $\mathcal{O}(N)$ operations. Another example is to use advanced techniques, such as the fast multipole method [21, 8], to compute $\nabla f$; then the cost can be reduced to nearly linear in $N$. In our discussion, we focus on the general case (referred to as the dense case) and the sparse case. For subspace decomposition type algorithms, including RCD, RBCD, and RFASD, each iteration only needs the gradient restricted to one subspace and, therefore, the cost of computing the gradient is reduced proportionally to the subspace dimension $n_i$ ($\mathcal{O}(N n_i)$ in the dense case, $\mathcal{O}(n_i)$ in the sparse case); note $n_i = 1$ for RCD. When the preconditioning technique is applied, extra cost is introduced besides computing the gradient. Since inverting an $n \times n$ matrix costs more than $\mathcal{O}(n^2)$ operations, the extra cost for PGD is governed by the need for $A^{-1}$ on the full space, while for the proposed RFASD the extra cost is reduced to computing $A_i^{-1}$ on each subspace. Now, we summarize the complexity comparison in Table 1.
Table 1: Iteration complexity (convex and strongly convex cases) and cost per iteration (dense and sparse cases) for GD, PGD, RCD, RBCD, and RFASD.
From Table 1, it is clear that RFASD can take advantage of the preconditioning effect, i.e., a condition number measured in the $A$-norm that is much smaller than the one measured in the $l^2$ norm. Meanwhile, there is no need to invert $A$ globally; we only need to compute $A_i^{-1}$ on each subspace, which reduces the computational cost whenever the subspace dimensions are small relative to $N$. Of course, the key is a stable space decomposition in the $A$-norm such that the stability constant $C_A$ can be kept small. In the next section, we use Nesterov's "worst" problem [17] as an example to demonstrate how to achieve this in practice.
4 Examples
In this section, we give some examples of the RFASD method and use the example introduced by Nesterov [17] to discuss different methods.
We first recall Nesterov's "worst" problem [17].

Example
For $x \in \mathbb{R}^N$, consider the unconstrained minimization problem (1) with
(18) 
where $x_i$ represents the $i$th coordinate of $x$ and $k$ is a constant integer that defines the intrinsic dimension of the problem. The minimum value of the function is
(19) 
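For reference, a minimal sketch of one standard form of this function (the unit-scaled version from Nesterov's book; the scaling used in (18) may differ) and its gradient is given below.

```python
import numpy as np

def nesterov_worst(x, k):
    """One standard (unit-scaled) form of Nesterov's 'worst' function:
    f(x) = 1/2 * (x_1^2 + sum_{i=1}^{k-1} (x_{i+1} - x_i)^2 + x_k^2) - x_1,
    where only the first k coordinates of x enter the objective."""
    z = x[:k]
    diff = z[1:] - z[:-1]
    return 0.5 * (z[0] ** 2 + np.dot(diff, diff) + z[-1] ** 2) - z[0]

def nesterov_worst_grad(x, k):
    """Gradient of the form above: on the first k coordinates it equals H z - e_1,
    where H is tridiagonal with 2 on the diagonal and -1 on the off-diagonals."""
    z = x[:k]
    g = np.zeros_like(x, dtype=float)
    g[:k] = 2.0 * z
    g[:k - 1] -= z[1:]
    g[1:k] -= z[:-1]
    g[0] -= 1.0
    return g
```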
4.1 Randomized block coordinate descent methods
We follow [16] to present the RBCD methods. Let $\mathcal{V} = \mathbb{R}^N$ be endowed with the standard $l^2$ norm, i.e., $A = I$. Define a partition of the unit matrix into column blocks, $I = (U_1, U_2, \ldots, U_J)$.
Now we consider the space decomposition $\mathcal{V} = \sum_{i=1}^{J} \mathcal{V}_i$, where $\mathcal{V}_i = \operatorname{range}(U_i)$ and $n_i = \dim \mathcal{V}_i$. Naturally, the prolongation is $I_i = U_i$ and the restriction is $I_i^{\top} = U_i^{\top}$ in this setting. For each subspace, we also use the $l^2$ norm, i.e., $A_i = I_{n_i}$ is the identity matrix of size $n_i$. In this setting, RFASD (Algorithm 1) is given by
(20) $x^{k+1} = x^k - \frac{1}{L_{i_k}} U_{i_k} U_{i_k}^{\top} \nabla f(x^k),$
where $i_k$ is the randomly chosen block at iteration $k$.
This is just the RBCD algorithm proposed in [16]. Moreover, if the space decomposition is coordinate-wise, i.e., $n_i = 1$ and $J = N$, then it reduces to the RCD method.
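As a sanity check of this reduction, with $A = I$, coordinate subspaces, and scalar local problems, the local solve is trivial and one RFASD step only modifies a single entry; a minimal sketch (with an assumed gradient oracle `grad_f`) is:

```python
import numpy as np

def rcd_step(x, grad_f, L, rng):
    """One RCD step viewed as a special case of RFASD: coordinate subspaces,
    identity subspace inner products, and sampling proportional to L_i."""
    L = np.asarray(L, dtype=float)
    i = rng.choice(len(x), p=L / L.sum())
    x = x.astype(float)
    x[i] -= grad_f(x)[i] / L[i]   # step of size 1/L_i along coordinate i
    return x
```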
Regarding the convergence analysis, since the subspace decomposition is direct and orthogonal in the $l^2$ inner product, in this case we have $\sum_{i=1}^{J} \|v_i\|^2 = \|v\|^2$, which implies that $C_A = 1$ in (SD). Moreover, because the $l^2$ norm is used here, the Lipschitz constants and the strong convexity constant are measured in the $l^2$ norm and, hence, we drop the subscript $A$ for those constants. Finally, we apply Theorems 1 and 2 and recover the classical convergence results of RBCD [16] as follows:

Convex case:

Strongly convex case:
Consider Example 4. Since the $l^2$ norm is used, it is easy to see that the Lipschitz constant is $\mathcal{O}(1)$ while the strong convexity constant degenerates like the inverse square of the dimension. Therefore, the condition number is $\mathcal{O}(N^2)$ and, for the strongly convex case, the convergence rate and, according to Table 1, the number of operations needed to achieve a given accuracy $\epsilon$ deteriorate accordingly (even though this problem is sparse). This could be quite expensive, even impractical, for large $N$, i.e., large-scale problems.
4.2 Randomized fast subspace descent methods
The RFASD method allows us to use a preconditioner without computing $A^{-1}$ on the whole space. We choose an appropriate norm defined by an SPD matrix $A$. Let $\mathcal{V} = \sum_{i=1}^{J} \mathcal{V}_i$ be a space decomposition. For each subspace, we still use the $A$-norm; namely, we use the restriction of the $A$-inner product to each $\mathcal{V}_i$. One can easily verify that, in matrix form, $A_i = I_i^{\top} A I_i$, which is the so-called Galerkin projection of $A$ to the subspace $\mathcal{V}_i$.
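A minimal sketch of forming these Galerkin projections from prolongation matrices `U[i]` (names assumed for illustration) and applying the corresponding local solve is:

```python
import numpy as np

def galerkin_blocks(A, U):
    """Galerkin projections A_i = U_i^T A U_i of an SPD matrix A onto the
    subspaces spanned by the columns of the prolongation matrices U_i."""
    return [Ui.T @ A @ Ui for Ui in U]

def local_direction(A_i, U_i, grad):
    """Search direction contributed by one subspace: solve the small system
    A_i s_i = U_i^T grad and prolongate the solution back to the full space."""
    s_i = np.linalg.solve(A_i, U_i.T @ grad)
    return U_i @ s_i
```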
In this case, the averaged Lipschitz constant and the strong convexity constant are measured in the $A$-norm. Based on Theorems 1 and 2, we naturally have:

Convex case:

Strongly convex case:
Compared with the convergence results of RBCD, the key here is to design an appropriate preconditioner, which induces the $A$-norm, and a corresponding stable decomposition such that the resulting constants are much smaller than their $l^2$ counterparts in the convex and strongly convex cases. Then we will achieve a speedup compared with RCD/RBCD.
Of course, the choices of the preconditioner and the stable decomposition are usually problem-dependent. Let us again consider Example 4. Note that the objective function can be written in the following matrix format:
(21) $f(x) = \frac{1}{2} x^{\top} A x - b^{\top} x,$
(22) $A = \operatorname{tridiag}(-1, 2, -1) \in \mathbb{R}^{N \times N}, \qquad b = e_1,$
where $e_1$ is the first canonical basis vector. A good choice of the preconditioner is $A$ itself. It is easy to verify that, when measured in the $A$-norm, the averaged Lipschitz constant and the strong convexity constant both equal $1$. Therefore, the condition number measured in the $A$-norm is $1$.
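For illustration, a short sketch constructing this tridiagonal matrix (under the unit scaling assumed above) and contrasting the two condition numbers is:

```python
import numpy as np

def tridiag_matrix(N):
    """Tridiagonal matrix with 2 on the diagonal and -1 on the off-diagonals,
    the Hessian of the quadratic form of Nesterov's 'worst' problem
    (up to the scaling assumed here)."""
    return (np.diag(2.0 * np.ones(N))
            - np.diag(np.ones(N - 1), k=1)
            - np.diag(np.ones(N - 1), k=-1))

N = 50
A = tridiag_matrix(N)

# The l2 condition number of A (and hence of the quadratic) grows like N^2,
# while using A itself as the preconditioner makes the condition number of
# the objective measured in the A-norm equal to 1.
print(np.linalg.cond(A))
```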