A Constant Step Stochastic Douglas-Rachford Algorithm with Application to Non Separable Regularizations

04/03/2018
by   Adil Salim, et al.

The Douglas-Rachford algorithm is a splitting method that converges to a minimizer of a sum of two convex functions. It consists of fixed-point iterations involving the proximity operators of the two functions, computed separately. This paper investigates a stochastic version of the algorithm where both functions are random and the step size is constant. We establish that the iterates of the algorithm stay close to the set of solutions with high probability when the step size is small enough. An application to structured regularization is considered.

1 Introduction

Many applications in the fields of machine learning [1] and signal processing [2] require the solution of the programming problem

$$\min_{x \in \mathcal{X}} \; F(x) + G(x), \qquad (1)$$

where $\mathcal{X}$ is a Euclidean space and $F$, $G$ are elements of $\Gamma_0(\mathcal{X})$, the set of convex, lower semi-continuous and proper functions on $\mathcal{X}$. In these contexts, $F$ often represents a cost function and $G$ a regularization term. The Douglas-Rachford algorithm is one of the most popular approaches towards solving Problem (1). Given a step size $\gamma > 0$ and an initial point $z_0 \in \mathcal{X}$, the algorithm is written

$$x_{n+1} = \operatorname{prox}_{\gamma G}(z_n), \qquad z_{n+1} = z_n + \operatorname{prox}_{\gamma F}\big(2x_{n+1} - z_n\big) - x_{n+1}, \qquad (2)$$

where $\operatorname{prox}_{\gamma h}$ denotes the proximity operator of $\gamma h$, defined for every $x \in \mathcal{X}$ by the equation

$$\operatorname{prox}_{\gamma h}(x) = \arg\min_{y \in \mathcal{X}} \; h(y) + \frac{1}{2\gamma}\|y - x\|^2.$$

Assuming that a standard qualification condition holds and that the set $\mathcal{S}$ of solutions of (1) is not empty, the sequence $(x_n)$ converges to an element of $\mathcal{S}$ as $n \to \infty$ ([3, 4]).
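For intuition, here is a standard example of a proximity operator (our illustration, not taken from the paper): for the absolute value on the real line, the proximity operator is the well-known soft-thresholding operator,

$$\operatorname{prox}_{\gamma |\cdot|}(x) \;=\; \arg\min_{y \in \mathbb{R}} \; |y| + \frac{1}{2\gamma}(y - x)^2 \;=\; \operatorname{sign}(x)\,\max\big(|x| - \gamma,\, 0\big).$$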

In this paper, we study the case where $F$ and $G$ are integral functionals of the form

$$F(x) = \mathbb{E}\big[f(\xi, x)\big] = \int f(s, x)\,\mu(ds), \qquad G(x) = \mathbb{E}\big[g(\xi, x)\big] = \int g(s, x)\,\mu(ds),$$

where $\xi$ is a random variable (r.v.) from some probability space $(\Omega, \mathcal{F}, \mathbb{P})$ into a measurable space $(\Xi, \mathcal{G})$, with distribution $\mu$, and where, for every $s \in \Xi$, the domains $\mathcal{D}_f(s)$ and $\mathcal{D}_g(s)$ of $f(s, \cdot)$ and $g(s, \cdot)$ are subsets of $\mathcal{X}$. In this context, the stochastic Douglas-Rachford algorithm aims to solve Problem (1) by iterating

$$x_{n+1} = \operatorname{prox}_{\gamma g(\xi_{n+1}, \cdot)}(z_n), \qquad z_{n+1} = z_n + \operatorname{prox}_{\gamma f(\xi_{n+1}, \cdot)}\big(2x_{n+1} - z_n\big) - x_{n+1}, \qquad (3)$$

where $(\xi_n)_{n \geq 1}$ is a sequence of i.i.d. copies of the random variable $\xi$ and $\gamma > 0$ is the constant step size. Compared to the "deterministic" Douglas-Rachford algorithm (2), the stochastic Douglas-Rachford algorithm (3) is an online method. The constant step size makes it implementable in adaptive signal processing or online machine learning contexts. In this algorithm, the function $F$ (resp. $G$) is replaced at each iteration by a random realization $f(\xi_{n+1}, \cdot)$ (resp. $g(\xi_{n+1}, \cdot)$). It can be implemented in the case where $F$ (resp. $G$) cannot be computed in closed form [5, 6] or in the case where the computation of its proximity operator is demanding [7]. Compared to other online optimization algorithms like the stochastic subgradient algorithm, algorithm (3) benefits from the numerical stability of stochastic proximal methods.
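A minimal Python sketch of the online iteration (3) is given below. The sampler `sample_xi` and the prox interfaces `prox_f_s`, `prox_g_s` are our own illustration of the required oracles, not the authors' code; taking functions that do not depend on the sample recovers the deterministic iteration (2).

```python
import numpy as np

def stochastic_douglas_rachford(sample_xi, prox_f_s, prox_g_s, z0,
                                gamma=0.1, n_iter=1000):
    """Constant-step stochastic Douglas-Rachford iteration (3).

    sample_xi(): draws one realization of the r.v. xi.
    prox_f_s(s, v, gamma), prox_g_s(s, v, gamma): proximity operators of
    f(s, .) and g(s, .) at the point v with step gamma.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(n_iter):
        s = sample_xi()                          # xi_{n+1}
        x = prox_g_s(s, z, gamma)                # x_{n+1} = prox of g(xi_{n+1}, .) at z_n
        y = prox_f_s(s, 2.0 * x - z, gamma)      # prox of f(xi_{n+1}, .) at 2 x_{n+1} - z_n
        z = z + y - x                            # z_{n+1}
    return x
```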

Stochastic versions of the Douglas-Rachford algorithm have been considered in [2, 8]. These papers consider the case where one of the two functions is deterministic, i.e., it is not written as an expectation, and the other one is written as an expectation that reduces to a finite sum. The latter case is also covered as a particular case of the algorithm of [9]. The algorithms of [10, 11] are generalizations of a partially stochastic Douglas-Rachford algorithm where one of the two functions is deterministic. The convergence of these algorithms is obtained under a summability assumption on the noise over the iterations. The stochastic Douglas-Rachford algorithm studied in this paper was implemented in an adaptive signal processing context [12] to solve a target tracking problem.

Whereas the paper [12] is mainly focused on the application to target tracking, in this work we provide a theoretical basis for algorithm (3) together with convergence results. Moreover, a novel application, in which a programming problem regularized with the overlapping group lasso is solved online, is provided.

The next section introduces some notations. Section 3 is devoted to the statement of the main convergence result. In Section 4, an outline of the proof of the result of Section 3 is provided. Finally, algorithm (3) is implemented to solve a regularized problem in Section 5.

2 Notations

For every function $h \in \Gamma_0(\mathcal{X})$, $\partial h(x)$ denotes the subdifferential of $h$ at the point $x$ and $\partial^0 h(x)$ the least-norm element of $\partial h(x)$. The domain of $h$ is denoted as $\operatorname{dom} h$. It is a known fact that the closure of $\operatorname{dom} h$, denoted as $\operatorname{cl}(\operatorname{dom} h)$, is convex. For every closed convex set $C$, we denote by $\Pi_C$ the projection operator onto $C$. The indicator function $\iota_C$ of the set $C$ is defined by $\iota_C(x) = 0$ if $x \in C$, and $\iota_C(x) = +\infty$ elsewhere. It is easy to see that $\iota_C \in \Gamma_0(\mathcal{X})$ and that $\operatorname{prox}_{\gamma \iota_C} = \Pi_C$ for every $\gamma > 0$.

The Moreau envelope $h^\gamma$ of $h$ is equal to

$$h^\gamma(x) = \min_{y \in \mathcal{X}} \; h(y) + \frac{1}{2\gamma}\|y - x\|^2$$

for every $x \in \mathcal{X}$. Recall that $h^\gamma$ is differentiable and $\nabla h^\gamma(x) = \gamma^{-1}\big(x - \operatorname{prox}_{\gamma h}(x)\big)$. If $h$ is differentiable, then $\nabla h^\gamma(x) = \nabla h\big(\operatorname{prox}_{\gamma h}(x)\big)$ and $\|\nabla h^\gamma(x)\| \leq \|\nabla h(x)\|$, for every $x \in \mathcal{X}$.
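For instance (a standard worked example, not from the paper), taking $h = |\cdot|$ on the real line gives the Huber function:

$$h^{\gamma}(x) = \min_{y \in \mathbb{R}} \; |y| + \frac{1}{2\gamma}(y - x)^2 =
\begin{cases} \dfrac{x^2}{2\gamma}, & |x| \le \gamma,\\[1ex] |x| - \dfrac{\gamma}{2}, & |x| > \gamma, \end{cases}
\qquad
\nabla h^{\gamma}(x) = \frac{x - \operatorname{prox}_{\gamma |\cdot|}(x)}{\gamma} =
\begin{cases} x/\gamma, & |x| \le \gamma,\\ \operatorname{sign}(x), & |x| > \gamma. \end{cases}$$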

When $S \subset \mathcal{X}$, $d(x, S)$ denotes the distance from the point $x$ to the set $S$. In the context of algorithm (3), we shall denote by $\mathcal{D}_f(s)$ and $\mathcal{D}_g(s)$ the closures of the domains of $f(s, \cdot)$ and $g(s, \cdot)$. Denote by $\mathcal{B}(\mathcal{X})$ the Borel sigma-field over $\mathcal{X}$. For every $p \geq 1$, $L^p$ is the set of all r.v. from the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ into the measurable space $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$ such that $\|X\|^p$ is integrable.

From now on, we shall state explicitly the dependence of the iterates of the algorithm on the step size and the starting point. Namely, we shall denote by $(z_n^{\gamma, \nu})$ the sequence generated by the stochastic Douglas-Rachford algorithm (3) with step $\gamma$, such that the distribution of $z_0$ over $\mathcal{X}$ is $\nu$. If $\nu = \delta_a$, where $\delta_a$ is the Dirac measure at the point $a$, we shall prefer the notation $(z_n^{\gamma, a})$.

3 Main convergence theorem

Consider the following assumptions.

Assumption 1.

For every compact set , there exists such that

Assumption 2.

For -a.e , is differentiable and there exists a closed ball in such that for all in this ball, where is -integrable. Moreover, for every compact set , there exists such that

Assumption 3.

.

Assumption 4.

For every compact set , there exists such that for all and all ,

Assumption 5.

There exists such that is -a.e, a -Lipschitz continuous function.

Assumption 6.

There exists and such that -a.s, and .

Assumption 7.

The function satisfies one of the following properties:

  1. it is coercive, i.e., it tends to $+\infty$ as $\|x\| \to +\infty$;

  2. it is supercoercive, i.e., it grows faster than $\|x\|$ as $\|x\| \to +\infty$.

Assumption 8.

There exists , such that for all and all ,

Theorem 1.

Let Assumptions 1–8 hold true. Then, for each probability measure $\nu$ over $\mathcal{X}$ having a finite second moment and for any $\varepsilon > 0$,

Moreover, if Assumption 7(2) holds true, then

where .

Loosely speaking, the theorem states that, with high probability, the iterates stay close to the set $\mathcal{S}$ of solutions as $n \to \infty$ and $\gamma \to 0$.

Some of the assumptions deserve comments.

Following [13], we say that a finite collection $C_1, \dots, C_m$ of closed convex subsets of $\mathcal{X}$ is linearly regular if there exists $\kappa > 0$ such that, for every $x \in \mathcal{X}$,

$$d\Big(x, \bigcap_{i=1}^m C_i\Big) \;\leq\; \kappa \, \max_{1 \leq i \leq m} d(x, C_i).$$

In the case where there exists a $\mu$-probability one set $\Xi_0$ such that the collection of domains $\{\mathcal{D}_f(s), \mathcal{D}_g(s) : s \in \Xi_0\}$ is finite, it is routine to check that Assumption 3 holds if and only if these domains are linearly regular. See [12] for an applicative context of algorithm (3) in the latter case.
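As a simple illustration (ours, not from the paper), the two coordinate axes in $\mathbb{R}^2$ are linearly regular with constant $\kappa = 2$:

$$C_1 = \mathbb{R} \times \{0\}, \qquad C_2 = \{0\} \times \mathbb{R}, \qquad C_1 \cap C_2 = \{0\},$$
$$d\big(x, C_1 \cap C_2\big) = \|x\| \;\leq\; |x_1| + |x_2| \;=\; d(x, C_2) + d(x, C_1) \;\leq\; 2 \max_{i} d(x, C_i).$$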

It is a known fact that $\operatorname{prox}_{\gamma h}(x) \to \Pi_{\operatorname{cl}(\operatorname{dom} h)}(x)$ as $\gamma \to 0$, for each $x \in \mathcal{X}$. Assumptions 4 and 8 add controls on the convergence rate.

Since , and is differentiable, [14], where the set is defined by its Aumann integral

Therefore, using Fermat’s rule, if , then there exists , such that -a.s, and . We refer to as a representation of the solution . Assumption 6 ensures the existence of with a representation .

4 Outline of the convergence proof

This section is devoted to sketching the proof of the convergence of the stochastic Douglas-Rachford algorithm. The approach follows the same steps as [6] and is detailed in [15]. The first step of the proof is to study the dynamical behavior of the iterates $(z_n^{\gamma, a})$, where $a \in \mathcal{X}$ is the starting point. The Ordinary Differential Equation (ODE) method, well known in the literature of stochastic approximation ([16]), is applied. Consider the continuous-time stochastic process $\mathsf{z}^{\gamma, a}$ obtained by linearly interpolating, with time interval $\gamma$, the iterates $(z_n^{\gamma, a})$:

$$\mathsf{z}^{\gamma, a}(t) \;=\; z_n^{\gamma, a} + \frac{t - n\gamma}{\gamma}\big(z_{n+1}^{\gamma, a} - z_n^{\gamma, a}\big) \qquad (4)$$

for all $t$ such that $n\gamma \leq t < (n+1)\gamma$, for all $n \in \mathbb{N}$. Let Assumptions 1–4 hold true. (In the case where the domains are common, i.e., $\mu$-a.s. constant, the moment Assumptions 1 and 2 are sufficient to state the dynamical behavior result; see [12] for an applicative context where the domains are distinct.) Consider the set $C(\mathbb{R}_+, \mathcal{X})$ of continuous functions from $\mathbb{R}_+$ to $\mathcal{X}$, equipped with the topology of uniform convergence on compact intervals. It is shown that the continuous-time stochastic process $\mathsf{z}^{\gamma, a}$ converges weakly over $C(\mathbb{R}_+, \mathcal{X})$ (i.e., in distribution in $C(\mathbb{R}_+, \mathcal{X})$) as $\gamma \to 0$. Moreover, the limit is proven to be the unique absolutely continuous function $\mathsf{z}$ over $\mathbb{R}_+$ satisfying $\mathsf{z}(0) = a$ and, for almost every $t$, the Differential Inclusion (DI)

$$\dot{\mathsf{z}}(t) \in -\big(\partial F + \partial G\big)\big(\mathsf{z}(t)\big) \qquad (5)$$

(see [17]). Differential inclusions like (5) generalize ODEs to set-valued mappings. The DI (5) induces a map that can be extended to a semi-flow, still denoted by $\Phi$.
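To make the interpolation (4) concrete, here is a small Python helper (our own illustration, under the assumption that a finite run of iterates is stored as a NumPy array); `np.interp` performs the piecewise-linear interpolation with time interval gamma.

```python
import numpy as np

def interpolated_process(iterates, gamma):
    """Return a callable t -> z_bar(t), the piecewise-linear interpolation (4)
    of a finite sequence of iterates with time interval gamma."""
    iterates = np.asarray(iterates, dtype=float)   # shape (n_iter, dim)
    times = gamma * np.arange(len(iterates))       # t_n = n * gamma
    def z_bar(t):
        # interpolate each coordinate separately
        return np.array([np.interp(t, times, iterates[:, k])
                         for k in range(iterates.shape[1])])
    return z_bar

# Example: interpolate a toy 2-D trajectory with step gamma = 0.1.
traj = np.cumsum(np.random.randn(50, 2) * 0.1, axis=0)
z_bar = interpolated_process(traj, gamma=0.1)
print(z_bar(0.25))   # value halfway between iterates 2 and 3
```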

The weak convergence of $\mathsf{z}^{\gamma, a}$ to $\mathsf{z}$ is not enough to study the long-term behavior of the iterates $(z_n^{\gamma, a})$. The second step of the proof is to establish a stability result for the Feller Markov chain $(z_n^{\gamma, a})$. Denote by $P_\gamma$ its transition kernel. The deterministic counterpart of this step of the proof is the so-called Fejér monotonicity of the sequence generated by algorithm (2). Even if some work has been done [5, 18], there is no immediate way to adapt Fejér monotonicity to our random setting, mainly because of the constant step $\gamma$. As an alternative, we assume Assumptions 5–6 and prove the existence of positive numbers $\alpha$ and $\beta$ such that, for every $n$,

(6)

In this inequality, $\mathbb{E}_n$ denotes the conditional expectation with respect to the sigma-algebra $\sigma(\xi_1, \dots, \xi_n)$ and

Since the Moreau envelope is decreasing in $\gamma$ [6, 15], the function appearing in (6) can be replaced by a function that does not depend on $\gamma$. Besides, the coercivity required by Assumption 7 implies the coercivity of the latter function [6, 15]. Therefore, assuming Assumptions 5–7 and setting the constants accordingly, there exist positive numbers $\alpha'$ and $\beta'$ such that, for every $n$,

(7)

Equation (7) can alternatively be seen as a tightness result. It implies that the set $\mathcal{I}(P_\gamma)$ of invariant measures of the Markov kernel $P_\gamma$ is not empty for every $\gamma$ small enough, and that the set

$$\mathrm{Inv} \;=\; \bigcup_{\gamma \in (0, \gamma_0]} \mathcal{I}(P_\gamma) \qquad (8)$$

(for a small enough $\gamma_0 > 0$) is tight [19, 20].

It remains to characterize the cluster points of Inv as $\gamma \to 0$. To that end, the dynamical behavior result and the stability result are combined. Let Assumptions 1–8 hold true. (Assumptions 3–4 and 8 are not needed if the domains are common.) Then, the set Inv is tight and, as $\gamma \to 0$, every cluster point of Inv is an invariant measure for the semi-flow $\Phi$. Theorem 1 is a consequence of this fact.

5 Application to structured regularization

In this section, we provide an application of the stochastic Douglas-Rachford algorithm (3) to a regularized optimization problem. Consider Problem (1), where $F$ is a cost function written as an expectation and $G$ is a regularization term. Towards solving (1), many approaches involve the computation of the proximity operator of the regularization term $G$. In the case where $G$ is a structured regularization term, its proximity operator is often difficult to compute. When $G$ is a graph-based regularization, it is possible to apply a stochastic proximal method to address the regularization [7]. We shall concentrate on the case where $G$ is an overlapping group regularization. In this case, the computation of the proximity operator of $G$ is known to be a bottleneck [21]. We shall apply algorithm (3) to overcome this difficulty.

Consider $\mathcal{X} = \mathbb{R}^d$, a regularization parameter $\lambda > 0$, and an integer $K \geq 1$. Consider subsets $S_1, \dots, S_K$ of $\{1, \dots, d\}$, possibly overlapping. Set $G(x) = \lambda \sum_{k=1}^K \|x_{S_k}\|$, where $x_{S_k}$ denotes the restriction of $x$ to the index set $S_k$ and $\|\cdot\|$ denotes the Euclidean norm. Set $F(x) = \mathbb{E}\big[\ell(Y \langle A, x\rangle)\big]$, where $\ell(t) = \max(0, 1 - t)$ denotes the hinge loss and $\xi = (A, Y)$ is a r.v. defined on some probability space with values in $\mathbb{R}^d \times \{-1, +1\}$. In this case, Problem (1) is also called the SVM classification problem regularized by the overlapping group lasso. It is assumed that the user is provided with i.i.d. copies of the r.v. $\xi$ online.

To solve this problem, we implement a stochastic Douglas-Rachford strategy. To that end, the regularization is rewritten as $G(x) = \mathbb{E}\big[K\lambda \|x_{S_J}\|\big]$, where $J$ is a uniform r.v. over $\{1, \dots, K\}$. At each iteration $n$ of the stochastic Douglas-Rachford algorithm, the user is provided with the realization $\xi_{n+1} = (A_{n+1}, Y_{n+1})$ and samples a group index $J_{n+1}$ uniformly in $\{1, \dots, K\}$. Then, a Douglas-Rachford step is done, involving the computation of the proximity operators of the functions $x \mapsto \ell(Y_{n+1}\langle A_{n+1}, x\rangle)$ and $x \mapsto K\lambda \|x_{S_{J_{n+1}}}\|$.
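A minimal Python sketch of one such stochastic Douglas-Rachford step is given below, under our own assumptions about the interfaces (a list of index arrays for the groups, NumPy vectors for the data). The closed-form proximity operators of a single group norm (block soft-thresholding) and of the sampled hinge term (scalar hinge prox combined with the composition rule for a single linear functional) are standard facts recalled in the comments; this is an illustration, not the authors' code.

```python
import numpy as np

def prox_group(x, S, w, gamma):
    """Prox of x -> w * ||x_S|| (block soft-thresholding on the group S)."""
    y = x.copy()
    norm_S = np.linalg.norm(x[S])
    y[S] = 0.0 if norm_S == 0 else x[S] * max(0.0, 1.0 - gamma * w / norm_S)
    return y

def prox_hinge(x, a, y_label, gamma):
    """Prox of x -> max(0, 1 - y_label * <a, x>), via the scalar hinge prox
    and the composition rule prox_{h(<v,.>)}(x) = x + v (prox_{|v|^2 h}(<v,x>) - <v,x>) / |v|^2."""
    v = y_label * a                        # f(x) = max(0, 1 - <v, x>)
    if v @ v == 0:
        return x
    t = v @ x
    g = gamma * (v @ v)
    # prox of the scalar hinge h(t) = max(0, 1 - t) with parameter g
    t_new = t + g if t < 1.0 - g else (1.0 if t <= 1.0 else t)
    return x + v * (t_new - t) / (v @ v)

def stochastic_dr_step(z, a, y_label, groups, lam, gamma):
    """One iteration of (3) for the SVM + overlapping group lasso problem."""
    K = len(groups)
    S = groups[np.random.randint(K)]                  # J_{n+1} uniform over the groups
    x = prox_group(z, S, K * lam, gamma)              # prox of K*lam*||.||_{S_J}
    y = prox_hinge(2.0 * x - z, a, y_label, gamma)    # prox of the sampled hinge term
    return x, z + y - x                               # (x_{n+1}, z_{n+1})
```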

This strategy is compared with a partially stochastic Douglas-Rachford algorithm, deterministic in the regularization $G$, where the fast subroutine FoGLasso [21] is used to compute the proximity operator of the regularization $G$. At each iteration $n$, the user is provided with $\xi_{n+1} = (A_{n+1}, Y_{n+1})$. Then, a Douglas-Rachford step is done, involving the computation of the proximity operators of the functions $x \mapsto \ell(Y_{n+1}\langle A_{n+1}, x\rangle)$ and $G$. Figure 1 demonstrates the advantage of treating the regularization term in a stochastic way.

Figure 1: The objective function as a function of time in seconds for each algorithm
Figure 2: Histogram of the initialization and of the last iterates of the stochastic D-R (S. D-R) and the partially stochastic D-R (Part. S. D-R)

In Figure 1 "Stochastic D-R" denotes the stochastic Douglas Rachford algorithm and "Partially stochastic D-R" denotes the partially stochastic Douglas Rachford where the subroutine FoG-Lasso [21] is used at each iteration to compute the true proximity operator of the regularization . Figure 2 shows the appearance of the first and the last iterates. Even if a best performing procedure [21] is used to compute , we observe on Figure 1 that Stochastic D-R takes advantage of being a stochastic method. This advantage is known to be twofold ([22]). First, the iteration complexity of Stochastic D-R is moderate because is never computed. Then, Stochastic D-R is faster than its partially deterministic counterpart which uses Fog-Lasso [21] as a subroutine, especially in the first iterations of the algorithms. Moreover, Stochastic D-R seems to perform globally better. This is because every proximity operators in Stochastic D-R can be efficiently computed ([23]). Contrary to the proximity operator of  [21], the proximity operator of is easily computable. The proximity operator of is easily computable as well.333Even if

(logistic regression), the proximity operator of

is easily computable, see [2].

References