1 Introduction
For a collection of real Hilbert spaces $\mathcal{H}_0, \mathcal{H}_1, \ldots, \mathcal{H}_n$, consider the problem of finding $z \in \mathcal{H}_0$ such that
(1)  $0 \in \sum_{i=1}^{n} G_i^* T_i (G_i z),$
where the $G_i : \mathcal{H}_0 \to \mathcal{H}_i$ are linear and bounded operators, the $T_i : \mathcal{H}_i \rightrightarrows \mathcal{H}_i$ are maximal monotone operators, and additionally there exists a subset $\mathcal{I}_{\mathrm{F}} \subseteq \{1, \ldots, n\}$ such that for all $i \in \mathcal{I}_{\mathrm{F}}$ the operator $T_i$ is Lipschitz continuous. An important instance of this problem is
(2)  $\min_{z \in \mathcal{H}_0} \; \sum_{i=1}^{n} f_i(G_i z),$
where every $f_i : \mathcal{H}_i \to (-\infty, +\infty]$ is closed, proper and convex, with some subset of the functions also being differentiable with Lipschitz-continuous gradients. Under appropriate constraint qualifications, (1) with $T_i = \partial f_i$ and (2) are equivalent. Problem (2) arises in a host of applications such as machine learning, signal and image processing, inverse problems, and computer vision; see
[4, 9, 11] for some examples. Operator splitting algorithms are now a common way to solve structured monotone inclusions such as (1). Until recently, there were three underlying classes of operator splitting algorithms: forward-backward [26], Douglas/Peaceman-Rachford [24], and forward-backward-forward [32]. In [13], Davis and Yin introduced a new operator splitting algorithm which does not reduce to any of these methods. Many algorithms for more complicated monotone inclusions and optimization problems involving many terms and constraints are in fact applications of one of these underlying techniques to a reduced monotone inclusion in an appropriately defined product space [6, 21, 12, 5, 10]. These four operator splitting techniques are, in turn, special cases of the Krasnoselskii-Mann (KM) iteration for finding a fixed point of a nonexpansive operator [23, 25].
A different, relatively recently proposed class of operator splitting algorithms is projective splitting: this class has a different convergence mechanism based on projection onto separating sets and does not in general reduce to the KM iteration. The root ideas underlying projective splitting can be found in [20, 30, 31], which dealt with monotone inclusions with a single operator. The algorithm of [17] significantly built on these ideas to address the case of two operators and was thus the original projective "splitting" method. This algorithm was generalized to more than two operators in [18]. The related algorithm in [1] introduced a technique for handling compositions of linear and monotone operators, and [8] proposed an extension to "block-iterative" and asynchronous operation; block-iterative operation means that only a subset of the operators making up the problem need to be considered at each iteration (this approach may be called "incremental" in the optimization literature). A restricted and simplified version of this framework appears in [16].
The asynchronous and block-iterative nature of projective splitting, as well as its ability to handle composition with linear operators, gives it an unprecedented level of flexibility compared with prior classes of operator splitting methods, none of which can be readily implemented in an asynchronous or block-iterative manner. Further, in the projective splitting methods of [8, 16], the order in which operators are processed is deterministic, variable, and highly flexible. It is not necessary that each operator be processed the same number of times either exactly or approximately; in fact, one operator may be processed much more often than another. The only constraint is that there is an upper bound on the number of iterations between the consecutive times each operator is processed.
Projective splitting algorithms work by performing separate calculations on each individual operator to construct a separating hyperplane between the current iterate and the problem's Kuhn-Tucker set (essentially the set of primal and dual solutions), and then projecting onto this hyperplane. In prior projective splitting algorithms, the only operation performed on the individual operators is a proximal (backward) step, which consists of evaluating the operator resolvents $(I + \rho T_i)^{-1}$ for some scalar $\rho > 0$. In this paper, we show how, for the Lipschitz continuous operators, the same kind of framework can also make use of forward steps on the individual operators, equivalent to applying $T_i$. Typically, such "explicit" steps are computationally much easier than "implicit", proximal steps. Our procedure requires two forward steps each time it evaluates an operator, and in this sense is reminiscent of Tseng's forward-backward-forward method [32] and Korpelevich's extragradient method [22]. Indeed, for the special case of only one operator, projective splitting with the new procedure reduces to the variant of the extragradient method in [20]. Each stepsize must be bounded by the inverse of the Lipschitz constant of $T_i$. However, a simple backtracking procedure can eliminate the need to estimate the Lipschitz constant, and other options are available for selecting the stepsize when $T_i$ is affine.
1.1 Intuition and contributions: basic idea
We first provide some intuition into our fundamental idea of incorporating forward steps into projective splitting. For simplicity, consider (1) without the linear operators $G_i$; that is, we want to find $z \in \mathcal{H}$ such that $0 \in \sum_{i=1}^{n} T_i z$, where $T_1, \ldots, T_n$ are maximal monotone operators on a single real Hilbert space $\mathcal{H}$. We formulate the Kuhn-Tucker solution set of this problem as
(3)  $\mathcal{S} \triangleq \left\{ (z, w_1, \ldots, w_{n-1}) \;\middle|\; w_i \in T_i z, \; i = 1, \ldots, n-1, \;\; -\textstyle\sum_{i=1}^{n-1} w_i \in T_n z \right\}.$
It is clear that $z$ solves $0 \in \sum_{i=1}^{n} T_i z$ if and only if there exist $w_1, \ldots, w_{n-1}$ such that $(z, w_1, \ldots, w_{n-1}) \in \mathcal{S}$. A separator-projector algorithm for finding a point in $\mathcal{S}$ will, at each iteration $k$, find a closed and convex set $H_k$ which separates $\mathcal{S}$ from the current point, meaning $\mathcal{S}$ is entirely in the set and the current point is not. One can then move closer to the solution set by projecting the current point onto the set $H_k$.
If we define $\mathcal{S}$ as in (3), then the separator formulation presented in [8] constructs the set $H_k$ through the function
(4)  $\varphi_k(z, w_1, \ldots, w_{n-1}) = \sum_{i=1}^{n-1} \left\langle z - x_i^k, \, y_i^k - w_i \right\rangle + \left\langle z - x_n^k, \; y_n^k + \textstyle\sum_{i=1}^{n-1} w_i \right\rangle$
(5)  $\phantom{\varphi_k(z, w_1, \ldots, w_{n-1})} = \left\langle z, \textstyle\sum_{i=1}^{n} y_i^k \right\rangle + \sum_{i=1}^{n-1} \left\langle x_i^k - x_n^k, \, w_i \right\rangle - \sum_{i=1}^{n} \left\langle x_i^k, y_i^k \right\rangle$
for some $(x_i^k, y_i^k)$ such that $y_i^k \in T_i x_i^k$, $i = 1, \ldots, n$. From its expression in (5) it is clear that $\varphi_k$ is an affine function on $\mathcal{H}^n$. Furthermore, it may easily be verified that for any $p = (z, w_1, \ldots, w_{n-1}) \in \mathcal{S}$, one has $\varphi_k(p) \le 0$, so that the separator set $H_k$ may be taken to be the halfspace $\{ p \mid \varphi_k(p) \le 0 \}$. The key idea of projective splitting is, given a current iterate $p^k = (z^k, w_1^k, \ldots, w_{n-1}^k)$, to pick $(x_i^k, y_i^k)$ so that $\varphi_k(p^k)$ is positive if $p^k \notin \mathcal{S}$. Then, since the solution set is entirely on the other side of the hyperplane $\{ p \mid \varphi_k(p) = 0 \}$, projecting the current point onto this hyperplane makes progress toward the solution. If it can be shown that this progress is sufficiently large, then it is possible to prove (weak) convergence.
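As a concrete illustration of the sign property just described, the following minimal numerical sketch (a hypothetical instance with $n = 2$ affine monotone operators on $\mathbb{R}^d$; all names are illustrative and not from the paper) evaluates a separator of the form (4) at a Kuhn-Tucker point and confirms it is nonpositive there:

```python
import numpy as np

# Hypothetical instance for n = 2: T1(x) = A1 @ x, T2(x) = A2 @ x with
# A1, A2 positive definite, so the root of T1 z + T2 z = 0 is z* = 0
# with dual variable w* = T1(z*) = 0, i.e. (0, 0) lies in the set S of (3).
d = 3
A1 = 2.0 * np.eye(d)          # T1 = 2 I
A2 = np.eye(d)                # T2 = I

def phi(z, w, x1, y1, x2, y2):
    """Affine separator (4) specialized to n = 2 (one dual variable w)."""
    return (z - x1) @ (y1 - w) + (z - x2) @ (y2 + w)

# Arbitrary points on the graphs of the operators: y_i = T_i(x_i).
rng = np.random.default_rng(0)
x1 = rng.standard_normal(d); y1 = A1 @ x1
x2 = rng.standard_normal(d); y2 = A2 @ x2

# At the Kuhn-Tucker point (z*, w*) = (0, 0) the separator is nonpositive,
# regardless of which graph points were chosen:
z_star = np.zeros(d); w_star = np.zeros(d)
assert phi(z_star, w_star, x1, y1, x2, y2) <= 1e-12
```

Here $\varphi(0,0) = -\langle x_1, A_1 x_1\rangle - \langle x_2, A_2 x_2\rangle \le 0$ by positive semidefiniteness, matching the general claim that $\varphi_k \le 0$ on $\mathcal{S}$.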
Let the iterates of such an algorithm be $p^k = (z^k, w_1^k, \ldots, w_{n-1}^k)$. To simplify the subsequent analysis, define $w_n^k \triangleq -\sum_{i=1}^{n-1} w_i^k$ at each iteration $k$, whence it is immediate from (4) that $\varphi_k(p^k) = \sum_{i=1}^{n} \langle z^k - x_i^k, \, y_i^k - w_i^k \rangle$. To construct a function of the form (4) such that $\varphi_k(p^k) > 0$ whenever $p^k \notin \mathcal{S}$, it is sufficient to be able to perform the following calculation on each individual operator $T_i$: for $i = 1, \ldots, n$, find $(x_i^k, y_i^k)$ such that $y_i^k \in T_i x_i^k$ and $\langle z^k - x_i^k, \, y_i^k - w_i^k \rangle \ge 0$, with $\langle z^k - x_i^k, \, y_i^k - w_i^k \rangle > 0$ if $w_i^k \notin T_i z^k$. In earlier work on projective splitting [17, 18, 8, 1], the calculation of such a pair $(x_i^k, y_i^k)$ is accomplished by a proximal (implicit) step on the operator $T_i$: given a scalar $\rho > 0$, we find the unique pair $(x_i^k, y_i^k)$ such that $y_i^k \in T_i x_i^k$ and
(6)  $x_i^k + \rho\, y_i^k = z^k + \rho\, w_i^k.$
We immediately conclude that
(7)  $\left\langle z^k - x_i^k, \, y_i^k - w_i^k \right\rangle = \rho \left\| y_i^k - w_i^k \right\|^2 = \frac{1}{\rho} \left\| z^k - x_i^k \right\|^2 \ge 0,$
and furthermore that $\langle z^k - x_i^k, \, y_i^k - w_i^k \rangle > 0$ unless $z^k = x_i^k$, which would in turn imply that $y_i^k = w_i^k$ and $w_i^k \in T_i z^k$. If we perform such a calculation for each $i = 1, \ldots, n$, we have constructed a separator of the form (4) which, in view of $w_n^k = -\sum_{i=1}^{n-1} w_i^k$, has $\varphi_k(p^k) > 0$ if $p^k \notin \mathcal{S}$. This basic calculation on $T_i$ is depicted in Figure 1(a) for $\mathcal{H} = \mathbb{R}$: since $(x_i^k, y_i^k)$ and $(z^k, w_i^k)$ satisfy (6), the line segment between them must have slope $-1/\rho$, meaning that $(z^k - x_i^k)(y_i^k - w_i^k) \ge 0$ and thus that $\langle z^k - x_i^k, \, y_i^k - w_i^k \rangle \ge 0$. It also bears mentioning that the relation (7) plays (in generalized form) a key role in the convergence proof.
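The proximal calculation (6) and the identity (7) can be sketched numerically. The following is a minimal illustration for an affine monotone operator $T x = A x + b$ with $A$ positive semidefinite, for which the resolvent reduces to a linear solve; all names are illustrative assumptions, not the paper's code:

```python
import numpy as np

# Proximal (backward) step (6) for T(x) = A @ x + b with A symmetric PSD.
rng = np.random.default_rng(1)
d = 4
M = rng.standard_normal((d, d))
A = M @ M.T                     # symmetric PSD => T is maximal monotone
b = rng.standard_normal(d)
T = lambda x: A @ x + b
rho = 0.7                       # stepsize

z = rng.standard_normal(d)      # current primal iterate z^k
w = rng.standard_normal(d)      # current dual iterate w_i^k

# Solve x + rho*T(x) = z + rho*w for x, then set y = T(x) in gra T.
x = np.linalg.solve(np.eye(d) + rho * A, z + rho * (w - b))
y = T(x)

# (6): x + rho*y = z + rho*w holds exactly (up to roundoff).
assert np.allclose(x + rho * y, z + rho * w)

# (7): <z - x, y - w> = (1/rho) ||z - x||^2 >= 0.
assert np.isclose((z - x) @ (y - w), (1 / rho) * np.dot(z - x, z - x))
```

The key point is that $y - w = (z - x)/\rho$ follows directly from (6), so the inner product in (7) is automatically a nonnegative multiple of $\|z - x\|^2$.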
Consider now the case that $T_i$ is Lipschitz continuous with modulus $L_i$ (and hence single valued) and defined throughout $\mathcal{H}$. We now introduce a technique to accomplish something similar to the preceding calculation through two forward steps instead of a single backward step. We begin by evaluating $T_i z^k$ and using this value in place of $y_i^k$ in (6), yielding
(8)  $x_i^k = z^k - \rho \left( T_i z^k - w_i^k \right),$
and we use this value for $x_i^k$. This calculation is depicted by the lower left point in Figure 1(b). We then calculate $y_i^k = T_i x_i^k$, resulting in a pair $(x_i^k, y_i^k)$ on the graph of the operator; see the upper left point in Figure 1(b). For this choice of $(x_i^k, y_i^k)$, we next observe that
(9)  $\left\langle z^k - x_i^k, \, y_i^k - w_i^k \right\rangle = \frac{1}{\rho} \left\| z^k - x_i^k \right\|^2 + \left\langle z^k - x_i^k, \, y_i^k - T_i z^k \right\rangle$
(10)  $\phantom{\left\langle z^k - x_i^k, \, y_i^k - w_i^k \right\rangle} \ge \frac{1}{\rho} \left\| z^k - x_i^k \right\|^2 - L_i \left\| z^k - x_i^k \right\|^2$
(11)  $\phantom{\left\langle z^k - x_i^k, \, y_i^k - w_i^k \right\rangle} = \left( \frac{1}{\rho} - L_i \right) \left\| z^k - x_i^k \right\|^2.$
Here, (9) follows because $T_i z^k - w_i^k = \frac{1}{\rho}(z^k - x_i^k)$ from (8) and because we let $y_i^k = T_i x_i^k$. The inequality (10) then follows from the Cauchy-Schwarz inequality and the hypothesized Lipschitz continuity of $T_i$. If we require that $\rho < 1/L_i$, then we have $1/\rho - L_i > 0$, and (11) therefore establishes that $\langle z^k - x_i^k, \, y_i^k - w_i^k \rangle \ge 0$, with strict inequality unless $z^k = x_i^k$, which would imply $w_i^k = T_i z^k$. We thus obtain a conclusion very similar to (7) and the results immediately following from it, but using the constant $1/\rho - L_i$ in place of the positive constant $1/\rho$.
For $\mathcal{H} = \mathbb{R}$, this process is depicted in Figure 1(b). By construction, the line segment between $(x_i^k, T_i z^k)$ and $(z^k, w_i^k)$ has slope $-1/\rho$, which is "steeper" than the graph of the operator, which can have slope at most $L_i < 1/\rho$ by Lipschitz continuity. This guarantees that the line segment between $(x_i^k, y_i^k)$ and $(z^k, w_i^k)$ must have negative slope, which in $\mathbb{R}$ is equivalent to the claimed inner product property $\langle z^k - x_i^k, \, y_i^k - w_i^k \rangle > 0$.
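The two-forward-step construction (8)-(11) can be checked numerically. Below is a minimal sketch with a hypothetical monotone, Lipschitz affine operator (a scaled rotation, so its Lipschitz modulus is its spectral norm); all names are illustrative:

```python
import numpy as np

# T(x) = B @ x with B = 0.1*I + (rotation): monotone (symmetric part PSD)
# and Lipschitz with modulus L = ||B||_2.
B = np.array([[0.1, 1.0],
              [-1.0, 0.1]])
T = lambda x: B @ x
L = np.linalg.norm(B, 2)

z = np.array([1.0, -2.0])       # current z^k
w = np.array([0.5, 0.3])        # current w_i^k
rho = 0.9 / L                   # stepsize strictly below 1/L

x = z - rho * (T(z) - w)        # first forward step, eq. (8)
y = T(x)                        # second forward step, y on the graph of T

# (11): <z - x, y - w> >= (1/rho - L) ||z - x||^2, which is positive
# whenever z != x since rho < 1/L.
inner = (z - x) @ (y - w)
assert inner >= (1 / rho - L) * np.dot(z - x, z - x) - 1e-12
```

The assertion holds for any choice of $z$, $w$, and $\rho < 1/L$; it is exactly the chain (9)-(11) evaluated at one point.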
Using a backtracking line search, we will also be able to handle the situation in which the value of $L_i$ is unknown. If we choose any positive constant $\Delta$, then by elementary algebra the inequalities $\rho \le 1/(L_i + \Delta)$ and $1/\rho - L_i \ge \Delta$ are equivalent. Therefore, if we select some positive $\Delta$ and any such $\rho$, we have from (11) that
(12)  $\left\langle z^k - x_i^k, \, y_i^k - w_i^k \right\rangle \ge \Delta \left\| z^k - x_i^k \right\|^2,$
which implies the key properties we need for the convergence proofs. Therefore we may start with any $\rho > 0$, and repeatedly halve $\rho$ until (12) holds; in Section 4.1 below, we bound the number of halving steps required. In general, each trial value of $\rho$ requires one application of the Lipschitz continuous operator $T_i$. However, for the case of affine operators $T_i$, we will show that it is possible to compute a stepsize such that (12) holds with a total of only two applications of the operator. By contrast, most backtracking procedures in optimization algorithms require evaluating the objective function at each new candidate point, which in turn usually requires an additional matrix multiply in the quadratic case [3].
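The halving procedure just described can be sketched as follows; this is an illustrative toy (names and the particular operator are assumptions, not the paper's implementation), with the condition tested directly so that the Lipschitz constant is never needed:

```python
import numpy as np

# Backtracking sketch: halve rho until the checkable condition
# <z - x, y - w> >= delta * ||z - x||^2 holds.  Termination is guaranteed
# once rho <= 1/(L + delta), but L itself is never used by the loop.
B = np.array([[0.2, 2.0],
              [-2.0, 0.2]])
T = lambda x: B @ x             # Lipschitz; its modulus is unknown to the loop

def forward_pair(z, w, rho):
    x = z - rho * (T(z) - w)    # first forward step, eq. (8)
    return x, T(x)              # second forward step

def backtrack(z, w, rho=10.0, delta=1.0, max_halvings=60):
    for _ in range(max_halvings):
        x, y = forward_pair(z, w, rho)
        if (z - x) @ (y - w) >= delta * np.dot(z - x, z - x):
            return rho, x, y    # condition of the form (12) holds
        rho *= 0.5              # halve and retry: one extra T evaluation
    raise RuntimeError("backtracking failed to terminate")

z = np.array([3.0, 1.0]); w = np.array([0.0, -1.0])
rho, x, y = backtrack(z, w)
assert (z - x) @ (y - w) >= np.dot(z - x, z - x) - 1e-12  # delta = 1
```

Each failed trial costs one application of $T$, consistent with the count discussed above.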
1.2 Summary of Contributions
The main thrust of the remainder of this paper is to incorporate the second, forward-step construction of $(x_i^k, y_i^k)$ above into an algorithm resembling those of [8, 16], allowing some operators to use backward steps and others to use forward steps. Thus, projective splitting may become useful in a broad range of applications in which computing forward steps is preferable to computing or approximating proximal steps. The resulting algorithm inherits the asynchronous and block-iterative features of [8, 16]. It is worth mentioning that the stepsize constraints are unaffected by asynchrony: increasing the delays involved in communicating information between parts of the algorithm does not require smaller stepsizes. This contrasts with other asynchronous optimization and operator splitting algorithms [28, 27]. Another useful feature of the stepsizes is that they are allowed to vary across operators and iterations.
Like previous asynchronous projective splitting methods [16, 8], the asynchronous method developed here does not rely on randomization, nor is the algorithm formulated in terms of some fixed communication graph topology.
We will work with a slight restriction of problem (1), namely: find $z \in \mathcal{H}_0$ such that
(13)  $0 \in \sum_{i=1}^{n-1} G_i^* T_i (G_i z) + T_n z.$
In terms of problem (1), we are simply requiring that $G_n$ be the identity operator and thus that $\mathcal{H}_n = \mathcal{H}_0$. This is not much of a restriction in practice, since one could redefine the last operator as $G_n^* T_n G_n$, or one could simply append a new operator $T_{n+1}$ with $T_{n+1}(z) = \{0\}$ everywhere.
The principal reason for adopting a formulation involving the linear operators $G_i$ is that in many applications of (13) it may be relatively easy to compute the proximal step of $T_i$ but difficult to compute the proximal step of $G_i^* T_i G_i$. Our framework will include algorithms for (13) that may compute proximal steps on the $T_i$, forward steps when $T_i$ is Lipschitz continuous, and applications ("matrix multiplies") of $G_i$ and $G_i^*$. An interesting feature of the forward steps in our method is that while the allowable stepsizes depend on the Lipschitz constants of the $T_i$ for $i \in \mathcal{I}_{\mathrm{F}}$, they do not depend on the linear operator norms $\|G_i\|$, in contrast with primal-dual methods [6, 12, 33]. Furthermore, as mentioned above, the stepsizes used for each operator can be chosen independently and may vary by iteration.
We also suggest a greedy heuristic for selecting operators in block-iterative splitting, based on a simple proxy. Augmenting this heuristic with a straightforward safeguard allows one to retain all of the convergence properties of the main algorithm. The heuristic is not specifically tied to the use of forward steps and also applies to the earlier algorithms in [8, 16]. The numerical experiments in Section 5 below attest to its usefulness.
2 Mathematical Preliminaries
2.1 Notation
Summations of the form $\sum_{i=1}^{k} a_i$ for some collection $\{a_i\}$ will appear throughout this paper. To deal with the case $k = 0$, we use the standard convention that $\sum_{i=1}^{0} a_i = 0$. To ease the mathematical presentation, we use the following notation throughout the rest of the paper:
(14)  $\boldsymbol{\mathcal{H}} \triangleq \mathcal{H}_0 \times \mathcal{H}_1 \times \cdots \times \mathcal{H}_{n-1}.$
Note that when $n = 1$, $\boldsymbol{\mathcal{H}} = \mathcal{H}_0$. We will use a boldface $\mathbf{p} = (z, w_1, \ldots, w_{n-1})$ for elements of $\boldsymbol{\mathcal{H}}$.
Throughout, we will simply write $\|\cdot\|$ for the norm of any of the spaces $\mathcal{H}_i$ and let the space be inferred from the argument. In the same way, we will write $\langle \cdot, \cdot \rangle$ for the inner product of any $\mathcal{H}_i$. For the collective primal-dual space defined in Section 2.3 we will use a special norm and inner product with its own subscript.
For any maximal monotone operator $A$ we will use the notation $\operatorname{prox}_{\rho A} \triangleq (I + \rho A)^{-1}$, for any scalar $\rho > 0$, to denote the proximal operator, also known as the backward or implicit step with respect to $A$. This means that
$x = \operatorname{prox}_{\rho A}(t) \iff x + \rho y = t \text{ for some } y \in A x.$
The $x$ and $y$ satisfying this relation are unique. Furthermore, $\operatorname{prox}_{\rho A}$ is single-valued and defined everywhere [2, Prop. 23.2].
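A standard concrete example of this map: for $A = \partial |\cdot|$, the subdifferential of the absolute value, the resolvent $(I + \rho A)^{-1}$ is scalar soft-thresholding. The sketch below (illustrative names) verifies the defining relation $x + \rho y = t$ with $y \in A x$:

```python
import numpy as np

# Resolvent of A = subdifferential of |.|: soft-thresholding at level rho.
def prox_abs(t, rho):
    """x = (I + rho * d|.|)^{-1}(t), applied componentwise."""
    return np.sign(t) * np.maximum(np.abs(t) - rho, 0.0)

rho = 0.5
t = np.array([2.0, 0.3, -1.0])
x = prox_abs(t, rho)
y = (t - x) / rho               # the unique y with x + rho*y = t

assert np.allclose(x, [1.5, 0.0, -0.5])   # soft threshold at 0.5
assert np.allclose(x + rho * y, t)        # defining relation of the prox
assert np.all(np.abs(y) <= 1.0)           # y is a subgradient of |.| at x
```

Note that the backward step returns both a point $x$ and an operator value $y \in A x$, which is exactly the pair $(x_i^k, y_i^k)$ used by the separator construction of Section 1.1.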
We use the standard "$\rightharpoonup$" notation to denote weak convergence, which is of course equivalent to ordinary convergence in finite-dimensional settings.
Proof.
where the inequality follows from the convexity of the function $\|\cdot\|^2$. ∎
2.2 A Generic Linear SeparatorProjection Method
Suppose that $\mathcal{H}$ is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. A generic linear separator-projection method for finding a point in some closed and convex set $\mathcal{S} \subseteq \mathcal{H}$ is given in Algorithm 1.
The update on line 1 is the relaxed projection of $p^k$ onto the halfspace $\{ p \mid \varphi_k(p) \le 0 \}$ using the norm $\|\cdot\|$. In other words, if $P_k$ is the projection onto this halfspace, then the update is $p^{k+1} = (1 - \beta_k)\, p^k + \beta_k P_k\, p^k$. Note that we define the gradient $\nabla \varphi_k$ with respect to the inner product $\langle \cdot, \cdot \rangle$, meaning we can write
$\varphi_k(p) = \varphi_k(p^k) + \left\langle \nabla \varphi_k, \, p - p^k \right\rangle \quad \text{for all } p \in \mathcal{H}.$
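The relaxed projection onto the halfspace of an affine function has a closed form, sketched below with the Euclidean inner product (i.e., taking the weighting constant equal to one); the function and names are illustrative assumptions:

```python
import numpy as np

# Relaxed projection of p onto {q : phi(q) <= 0} for affine
# phi(q) = phi_at_p + <grad, q - p>, with relaxation beta in (0, 2).
def relaxed_projection(p, phi_at_p, grad, beta=1.0):
    if phi_at_p <= 0.0:                    # already on the solution side
        return p
    step = phi_at_p / np.dot(grad, grad)   # exact projection multiplier
    return p - beta * step * grad

# Example with phi(q) = a.q - c, so grad = a and phi(p) = a.p - c.
a = np.array([1.0, 2.0]); c = 1.0
p = np.array([3.0, 3.0])
q = relaxed_projection(p, a @ p - c, a, beta=1.0)
assert np.isclose(a @ q - c, 0.0)   # beta = 1 lands exactly on the hyperplane
```

With $\beta_k < 1$ the update stops short of the hyperplane (underrelaxation), and with $\beta_k > 1$ it overshoots (overrelaxation); both are admissible for $\beta_k \in (0,2)$.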
We will use the following well-known properties of algorithms fitting the template of Algorithm 1; see for example [7, 17]: Suppose $\mathcal{S}$ is closed and convex. Then for Algorithm 1,

the sequence $\{p^k\}$ is bounded;

$\sum_{k=1}^{\infty} \| p^{k+1} - p^k \|^2 < \infty$, and hence $p^{k+1} - p^k \to 0$;

if all weak limit points of $\{p^k\}$ are in $\mathcal{S}$, then $\{p^k\}$ converges weakly to some point in $\mathcal{S}$.
Note that we have not specified how to choose the affine function $\varphi_k$. For our specific application of the separator-projection framework, we will do so in Section 3.2.
2.3 Main Assumptions Regarding Problem (13)
Let $\mathcal{H}_n \triangleq \mathcal{H}_0$ and $G_n \triangleq I$, the identity operator on $\mathcal{H}_0$. Define the extended solution set or Kuhn-Tucker set of (13) to be
(15)  $\mathcal{S} \triangleq \left\{ (z, w_1, \ldots, w_{n-1}) \;\middle|\; w_i \in T_i(G_i z), \; i = 1, \ldots, n-1, \;\; -\textstyle\sum_{i=1}^{n-1} G_i^* w_i \in T_n z \right\}.$
Clearly $z$ solves (13) if and only if there exists $(w_1, \ldots, w_{n-1})$ such that $(z, w_1, \ldots, w_{n-1}) \in \mathcal{S}$. Our main assumptions regarding (13) are as follows: Problem (13) conforms to the following:

$\mathcal{H}_0$ and $\mathcal{H}_1, \ldots, \mathcal{H}_{n-1}$ are real Hilbert spaces.

For $i = 1, \ldots, n$, the operators $T_i : \mathcal{H}_i \rightrightarrows \mathcal{H}_i$ are monotone.

For all $i$ in some subset $\mathcal{I}_{\mathrm{F}} \subseteq \{1, \ldots, n\}$, the operator $T_i$ is $L_i$-Lipschitz continuous (and thus single-valued) and $\operatorname{dom} T_i = \mathcal{H}_i$.

For $i \in \mathcal{I}_{\mathrm{B}} \triangleq \{1, \ldots, n\} \setminus \mathcal{I}_{\mathrm{F}}$, the operator $T_i$ is maximal and the map $\operatorname{prox}_{\rho T_i}$ can be computed to within the error tolerance specified below in Assumption 3.4 (however, these operators are not precluded from also being Lipschitz continuous).

Each $G_i : \mathcal{H}_0 \to \mathcal{H}_i$ for $i = 1, \ldots, n-1$ is linear and bounded.

The solution set $\mathcal{S}$ defined in (15) is nonempty.
3 Our Algorithm and Convergence
3.1 Algorithm Definition
Algorithm 2 is our asynchronous block-iterative projective splitting algorithm with forward steps for solving (13). It is essentially a special case of the weakly convergent algorithm of [8], except that we use the new forward-step procedure to deal with the Lipschitz continuous operators $T_i$ for $i \in \mathcal{I}_{\mathrm{F}}$, instead of exclusively using proximal steps. For our separating hyperplane we use a special case of the formulation of [8], which is slightly different from the one used in [16]. Our method can be reformulated to use the same hyperplane as [16]; however, this requires that it be computationally feasible to project onto the subspace given by the equation $\sum_{i=1}^{n} G_i^* w_i = 0$.
The algorithm has the following parameters:

For each iteration $k \ge 1$, a subset $I_k \subseteq \{1, \ldots, n\}$ of the operators to be processed at that iteration.

For each iteration $k$ and $i \in I_k$, a positive scalar stepsize $\rho_i^k$.

For each iteration $k$ and $i \in I_k$, a delayed iteration index $d(i,k) \le k$, which allows the subproblem calculations to use outdated information.

For each iteration $k$, an overrelaxation parameter $\beta_k \in [\underline{\beta}, \overline{\beta}]$ for some constants $0 < \underline{\beta} \le \overline{\beta} < 2$.

Sequences of errors $\{e_i^k\}$ for $i \in \mathcal{I}_{\mathrm{B}}$, allowing us to model inexact computation of the proximal steps.
There are many ways in which Algorithm 2 could be implemented in various parallel computing environments; a specific suggestion for asynchronous implementation of a closely related class of algorithms is developed in [16, Section 3]. One simple option is a centralized or "master-slave" implementation in which the subproblem steps of Algorithm 2 are implemented on a collection of "worker" processors, while the remainder of the algorithm, most notably the coordination process embodied by the projection calculations, is executed by a single coordinating processor. However, such a simple implementation risks the coordinating processor becoming a serial bottleneck as the number of worker processors grows or the memory required to store the vectors $x_i^k, y_i^k$ for $i = 1, \ldots, n$ becomes large, since the amount of work required to execute the coordination steps is proportional to the total number of elements in these vectors. Fortunately, all but a constant amount of the work in the coordination calculations involves only sums, inner products, and matrix multiplies by $G_i$ and $G_i^*$. Summation and hence inner product operations can be efficiently distributed over multiple processors. Therefore, with some care exercised as to where one performs the matrix multiplications in cases in which the $G_i$ are nontrivial, the coordination calculations may be distributed over multiple processors so that the coordination process need not constitute a serial bottleneck.
In the form directly presented in Algorithm 2, the delay indices $d(i,k)$ may seem unmotivated; it might seem best to always select $d(i,k) = k$. However, these indices can play a critical role in modeling asynchronous parallel implementation. In the simple "master-slave" scheme described above, for example, the "master" might dispatch subproblems to worker processors, but not receive the results back immediately. In the meantime, other workers may report back results, which the master could incorporate into its projection calculations. In this context, $k$ counts the number of projection operations performed at the master, and $I_k$ is the set of subproblems whose solutions reached the master between iterations $k-1$ and $k$. For each $i \in I_k$, $d(i,k)$ is the index of the iteration completed just before subproblem $i$ was last dispatched for solution. In more sophisticated parallel implementations, $I_k$ and $d(i,k)$ would have similar interpretations.
3.2 The Hyperplane
In this section, we define the affine function our algorithm uses to construct a separating hyperplane. Let $\mathbf{p} = (z, w_1, \ldots, w_{n-1})$ be a generic point in $\boldsymbol{\mathcal{H}}$, the collective primal-dual space. For $\boldsymbol{\mathcal{H}}$, we adopt the following norm and inner product for some $\gamma > 0$: letting $\mathbf{p}' = (z', w_1', \ldots, w_{n-1}')$,
(16)  $\left\langle \mathbf{p}, \mathbf{p}' \right\rangle_\gamma \triangleq \gamma \langle z, z' \rangle + \sum_{i=1}^{n-1} \langle w_i, w_i' \rangle, \qquad \|\mathbf{p}\|_\gamma^2 \triangleq \gamma \|z\|^2 + \sum_{i=1}^{n-1} \|w_i\|^2.$
Define the following function, generalizing (4), at each iteration $k$:
(17)  $\varphi_k(\mathbf{p}) = \sum_{i=1}^{n-1} \left\langle G_i z - x_i^k, \, y_i^k - w_i \right\rangle + \left\langle z - x_n^k, \; y_n^k + \textstyle\sum_{i=1}^{n-1} G_i^* w_i \right\rangle,$
where the pairs $(x_i^k, y_i^k)$ are chosen so that $y_i^k \in T_i x_i^k$ for $i = 1, \ldots, n$ (recall that each inner product is for the corresponding Hilbert space $\mathcal{H}_i$). This function is a special case of the separator function used in [8]. The following lemma proves some basic properties of $\varphi_k$; similar results are in [1, 8, 16] in the case $\gamma = 1$.
Let $\varphi_k$ be defined as in (17). Then:

$\varphi_k$ is affine on $\boldsymbol{\mathcal{H}}$.

With respect to the inner product $\langle \cdot, \cdot \rangle_\gamma$ on $\boldsymbol{\mathcal{H}}$, the gradient of $\varphi_k$ is
$\nabla \varphi_k = \left( \frac{1}{\gamma} \left( \textstyle\sum_{i=1}^{n-1} G_i^* y_i^k + y_n^k \right), \; x_1^k - G_1 x_n^k, \; \ldots, \; x_{n-1}^k - G_{n-1} x_n^k \right).$

If the assumptions of Section 2.3 hold, $y_i^k \in T_i x_i^k$ for $i = 1, \ldots, n$, and $\mathbf{p} \in \mathcal{S}$, then $\varphi_k(\mathbf{p}) \le 0$.
Proof.
To see that $\varphi_k$ is affine, rewrite (17) as
(18)  $\varphi_k(\mathbf{p}) = \left\langle z, \, \textstyle\sum_{i=1}^{n-1} G_i^* y_i^k + y_n^k \right\rangle + \sum_{i=1}^{n-1} \left\langle x_i^k - G_i x_n^k, \, w_i \right\rangle - \sum_{i=1}^{n} \left\langle x_i^k, y_i^k \right\rangle.$
It is now clear that $\varphi_k$ is an affine function of $\mathbf{p}$. Next, fix an arbitrary $\mathbf{p} = (z, w_1, \ldots, w_{n-1}) \in \boldsymbol{\mathcal{H}}$. Using the fact that $\varphi_k$ is affine, we may write
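The equivalence of the two forms of the separator, as reconstructed in (17) and (18), can be checked numerically for a small instance. The sketch below (illustrative names; $n = 2$ with one nontrivial linear operator $G_1$ and $G_2 = I$) confirms that the two expressions agree at a random point, which is the algebra underlying the affinity claim:

```python
import numpy as np

# n = 2: primal space R^{d0}, one composed operator via G1: R^{d0} -> R^{d1}.
rng = np.random.default_rng(2)
d0, d1 = 3, 2
G1 = rng.standard_normal((d1, d0))

# Arbitrary graph points (x_i, y_i) and an arbitrary test point p = (z, w1).
x1, y1 = rng.standard_normal(d1), rng.standard_normal(d1)
x2, y2 = rng.standard_normal(d0), rng.standard_normal(d0)
z, w1 = rng.standard_normal(d0), rng.standard_normal(d1)

# Form (17): sum of inner products against the graph points.
phi_17 = (G1 @ z - x1) @ (y1 - w1) + (z - x2) @ (y2 + G1.T @ w1)

# Form (18): the expanded affine expression in (z, w1); the cross terms
# <G1 z, w1> and <z, G1^T w1> cancel in the expansion.
phi_18 = z @ (G1.T @ y1 + y2) + (x1 - G1 @ x2) @ w1 - x1 @ y1 - x2 @ y2

assert np.isclose(phi_17, phi_18)   # the two forms agree: phi is affine in p
```

Reading the coefficients of $z$ and $w_1$ off the expanded form also recovers the gradient components stated in the lemma.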