1 Introduction
Many applications in the fields of machine learning
[1] and signal processing [2] require the solution of the programming problem
$$\min_{x \in \mathcal{X}} \; F(x) + G(x), \qquad (1)$$
where $\mathcal{X}$ is a Euclidean space, and $F$ and $G$ are elements of the set $\Gamma_0(\mathcal{X})$ of convex, lower semi-continuous and proper functions on $\mathcal{X}$. In these contexts, $F$ often represents a cost function and $G$ a regularization term. The Douglas-Rachford algorithm is one of the most popular approaches towards solving Problem (1). Given $x_0 \in \mathcal{X}$ and a step $\gamma > 0$, the algorithm is written
$$x_{n+1} = x_n + \mathrm{prox}_{\gamma F}\big(2\,\mathrm{prox}_{\gamma G}(x_n) - x_n\big) - \mathrm{prox}_{\gamma G}(x_n), \qquad (2)$$
where $\mathrm{prox}_{\gamma G}$ denotes the proximity operator of $\gamma G$, defined for every $x \in \mathcal{X}$ by the equation
$$\mathrm{prox}_{\gamma G}(x) = \arg\min_{y \in \mathcal{X}} \; G(y) + \frac{1}{2\gamma}\|y - x\|^2.$$
Assuming that a standard qualification condition holds and that the set $\mathcal{S}$ of solutions of (1) is not empty, the sequence $(\mathrm{prox}_{\gamma G}(x_n))$ converges to an element of $\mathcal{S}$ as $n \to \infty$ ([3, 4]).
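To fix ideas, here is a minimal Python sketch of iteration (2) on a toy problem where both proximity operators are explicit ($F$ a quadratic, $G$ the indicator of a box); the function names and the data are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the deterministic Douglas-Rachford iteration (2) on the toy
# problem: minimize 0.5*||x - b||^2 + indicator of the box [0, 1]^d.
# Both proximity operators are explicit here; this is only an illustration.

def prox_quadratic(x, b, gamma):
    # prox of gamma * 0.5*||. - b||^2
    return (x + gamma * b) / (1.0 + gamma)

def prox_box(x, lo=0.0, hi=1.0):
    # prox of the indicator of the box = projection onto the box
    return np.clip(x, lo, hi)

def douglas_rachford(b, gamma=1.0, n_iter=200):
    x = np.zeros_like(b)
    for _ in range(n_iter):
        z = prox_box(x)                                    # prox_{gamma G}(x_n)
        x = x + prox_quadratic(2 * z - x, b, gamma) - z    # update (2)
    return prox_box(x)                                     # candidate solution

b = np.array([1.5, -0.3, 0.7])
print(douglas_rachford(b))   # expected: the projection of b onto the box, [1., 0., 0.7]
```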
In this paper, we study the case where $F$ and $G$ are integral functionals of the form
$$F(x) = \int_\Xi f(s, x)\, \mu(ds), \qquad G(x) = \int_\Xi g(s, x)\, \mu(ds),$$
where $\xi$ is a random variable (r.v.) from some probability space $(\Omega, \mathcal{T}, \mathbb{P})$ into a measurable space $(\Xi, \mathcal{G})$, with distribution $\mu$, and where, for every $s \in \Xi$, $f(s, \cdot)$ and $g(s, \cdot)$ are elements of $\Gamma_0(\mathcal{X})$ whose domains are subsets of $\mathcal{X}$. In this context, the stochastic Douglas-Rachford algorithm aims to solve Problem (1) by iterating
$$x_{n+1} = x_n + \mathrm{prox}_{\gamma f(\xi_{n+1}, \cdot)}\big(2\,\mathrm{prox}_{\gamma g(\xi_{n+1}, \cdot)}(x_n) - x_n\big) - \mathrm{prox}_{\gamma g(\xi_{n+1}, \cdot)}(x_n), \qquad (3)$$
where $(\xi_n)$ is a sequence of i.i.d. copies of the random variable $\xi$ and $\gamma > 0$ is the constant step size. Compared to the "deterministic" Douglas-Rachford algorithm (2), the stochastic Douglas-Rachford algorithm (3) is an online method. The constant step size makes it implementable in adaptive signal processing or online machine learning contexts. In this algorithm, the function $F$ (resp. $G$) is replaced at each iteration by a random realization $f(\xi_{n+1}, \cdot)$ (resp. $g(\xi_{n+1}, \cdot)$). It can be implemented in the case where $F$ (resp. $G$) cannot be computed in closed form [5, 6] or in the case where the computation of its proximity operator is demanding [7]. Compared to other online optimization algorithms like the stochastic subgradient algorithm, algorithm (3) benefits from the numerical stability of stochastic proximal methods.
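A minimal sketch of iteration (3) in the same spirit, assuming a streaming least-squares loss in place of $f(\xi, \cdot)$ and, for simplicity, a deterministic $\ell_1$ penalty in place of a sampled $g(\xi, \cdot)$; all names, data and step-size values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_sampled_square_loss(v, a, b, gamma):
    # prox of gamma * 0.5*(<a, .> - b)^2 (rank-one quadratic, closed form)
    return v - gamma * (a @ v - b) / (1.0 + gamma * (a @ a)) * a

def prox_l1(v, t):
    # prox of t * ||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_dr(stream, dim, gamma=0.05, n_iter=5000, lam=0.1):
    x = np.zeros(dim)
    for _ in range(n_iter):
        a, b = next(stream)                       # i.i.d. sample of the data
        z = prox_l1(x, gamma * lam)               # prox of the (here deterministic) penalty
        x = x + prox_sampled_square_loss(2 * z - x, a, b, gamma) - z   # update (3)
    return prox_l1(x, gamma * lam)

def data_stream(x_true, dim):
    # online stream of noisy linear measurements
    while True:
        a = rng.normal(size=dim)
        yield a, a @ x_true + 0.01 * rng.normal()

x_true = np.array([1.0, 0.0, -2.0, 0.0])
print(stochastic_dr(data_stream(x_true, 4), dim=4))   # roughly a soft-thresholded x_true
```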
Stochastic versions of the Douglas-Rachford algorithm have been considered in [2, 8]. These papers consider the case where one of the two functions is deterministic, i.e., is not written as an expectation, while the other is written as an expectation that reduces to a finite sum. The latter case is also contained as a particular case of the algorithm of [9]. The algorithms of [10, 11] are generalizations of a partially stochastic Douglas-Rachford algorithm in which one of the functions is deterministic. The convergence of these algorithms is obtained under a summability assumption on the noise over the iterations. The stochastic Douglas-Rachford algorithm studied in this paper was implemented in an adaptive signal processing context in [12] to solve a target tracking problem.
2 Notations
For every function $h \in \Gamma_0(\mathcal{X})$, $\partial h(x)$ denotes the subdifferential of $h$ at the point $x$ and $\partial_0 h(x)$ the least-norm element of $\partial h(x)$. The domain of $h$ is denoted as $\mathrm{dom}\, h$. It is a known fact that the closure of $\mathrm{dom}\, h$, denoted as $\overline{\mathrm{dom}}\, h$, is convex. For every closed convex set $C \subset \mathcal{X}$, we denote by $\Pi_C$ the projection operator onto $C$. The indicator function $\iota_C$ of the set $C$ is defined by $\iota_C(x) = 0$ if $x \in C$, and $+\infty$ elsewhere. It is easy to see that $\mathrm{prox}_{\gamma \iota_C} = \Pi_C$ for every $\gamma > 0$ and that $\partial \iota_C(x)$ is the normal cone to $C$ at $x$.
The Moreau envelope of $h$ with parameter $\gamma > 0$ is equal to
$$h^\gamma(x) = \min_{y \in \mathcal{X}} \; h(y) + \frac{1}{2\gamma}\|x - y\|^2$$
for every $x \in \mathcal{X}$. Recall that $h^\gamma$ is differentiable and $\nabla h^\gamma(x) = \gamma^{-1}\big(x - \mathrm{prox}_{\gamma h}(x)\big)$. If $h$ is differentiable, then $\nabla h^\gamma(x) = \nabla h(\mathrm{prox}_{\gamma h}(x))$ and $\|\nabla h^\gamma(x)\| \leq \|\nabla h(x)\|$, for every $x \in \mathcal{X}$.
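As a quick sanity check of the identity $\nabla h^\gamma(x) = \gamma^{-1}(x - \mathrm{prox}_{\gamma h}(x))$, the following sketch compares it with a finite difference of the Moreau envelope for $h = |\cdot|$ in one dimension; this is purely illustrative and not part of the paper.

```python
import numpy as np

# Numerical check of grad h_gamma(x) = (x - prox_{gamma h}(x)) / gamma
# for h = |.| in one dimension (prox is soft-thresholding).

gamma = 0.5

def prox_abs(x, gamma):
    return np.sign(x) * max(abs(x) - gamma, 0.0)

def moreau_env(x, gamma):
    p = prox_abs(x, gamma)
    return abs(p) + (x - p) ** 2 / (2 * gamma)

x = 1.3
grad_formula = (x - prox_abs(x, gamma)) / gamma
eps = 1e-6
grad_fd = (moreau_env(x + eps, gamma) - moreau_env(x - eps, gamma)) / (2 * eps)
print(grad_formula, grad_fd)   # both close to 1.0 since x > gamma
```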
When $C \subset \mathcal{X}$ is a closed set, $d(x, C)$ denotes the distance from the point $x$ to the set $C$. In the context of algorithm (3), we shall denote by $\mathcal{D}_f(s)$ and $\mathcal{D}_g(s)$ the closures of the domains of $f(s, \cdot)$ and $g(s, \cdot)$. Denote by $\mathcal{B}(\mathcal{X})$ the Borel sigma-field over $\mathcal{X}$. For every $p \geq 1$, $\mathcal{L}^p$ is the set of all r.v. $X$ from the probability space $(\Omega, \mathcal{T}, \mathbb{P})$ into the measurable space $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$, such that $\|X\|^p$ is integrable.
From now on, we shall state explicitly the dependence of the iterates of the algorithm on the step size and on the starting point. Namely, we shall denote by $(x_n^{\gamma,\nu})$ the sequence generated by the stochastic Douglas-Rachford algorithm (3) with step $\gamma$, such that the distribution of $x_0^{\gamma,\nu}$ over $\mathcal{X}$ is $\nu$. If $\nu = \delta_a$, where $\delta_a$ is the Dirac measure at the point $a \in \mathcal{X}$, we shall prefer the notation $(x_n^{\gamma,a})$.
3 Main convergence theorem
Consider the following assumptions.
Assumption 1.
For every compact set , there exists such that
Assumption 2.
For -a.e , is differentiable and there exists a closed ball in such that for all in this ball, where is -integrable. Moreover, for every compact set , there exists such that
Assumption 3.
.
Assumption 4.
For every compact set , there exists such that for all and all ,
Assumption 5.
There exists such that is -a.e, a -Lipschitz continuous function.
Assumption 6.
There exists and such that -a.s, and .
Assumption 7.
The function satisfies one of the following properties:
(i) it is coercive, i.e., it converges to $+\infty$ as $\|x\| \to +\infty$;
(ii) it is supercoercive, i.e., it converges to $+\infty$ faster than $\|x\|$ as $\|x\| \to +\infty$.
Assumption 8.
There exists , such that for all and all ,
Theorem 1.
Loosely speaking, the theorem states that, with high probability, the iterates $x_n^{\gamma,a}$ stay close to the set of solutions of Problem (1) as $n \to \infty$ and $\gamma \to 0$.
Some of the assumptions deserve comments.
Following [13], we say that a finite collection $\{D_1, \ldots, D_m\}$ of closed convex subsets of $\mathcal{X}$ is linearly regular if there exists $\kappa > 0$ such that, for every $x \in \mathcal{X}$,
$$d\Big(x, \bigcap_{i=1}^m D_i\Big) \leq \kappa \max_{i=1,\ldots,m} d(x, D_i).$$
In the case where there exists a $\mu$-probability-one set $\Xi' \subset \Xi$ such that the set of domains $\{\mathcal{D}_f(s), \mathcal{D}_g(s) : s \in \Xi'\}$ is finite, it is routine to check that Assumption 3 holds if and only if these domains are linearly regular. See [12] for an applicative context of algorithm (3) in the latter case.
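For intuition, here is a small numerical illustration of linear regularity with two half-spaces of $\mathbb{R}^2$, for which the inequality holds with $\kappa = \sqrt{2}$; the sets and the constant are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustration of linear regularity for D1 = {x : x[0] <= 0} and D2 = {x : x[1] <= 0}.
# Their intersection is the third quadrant and one can take kappa = sqrt(2).

rng = np.random.default_rng(1)

def proj_D1(x):   # projection onto the half-space x[0] <= 0
    return np.array([min(x[0], 0.0), x[1]])

def proj_D2(x):   # projection onto the half-space x[1] <= 0
    return np.array([x[0], min(x[1], 0.0)])

def proj_inter(x):   # projection onto the intersection (both coordinates <= 0)
    return np.minimum(x, 0.0)

ratios = []
for _ in range(1000):
    x = rng.normal(size=2) * 5
    num = np.linalg.norm(x - proj_inter(x))
    den = max(np.linalg.norm(x - proj_D1(x)), np.linalg.norm(x - proj_D2(x)))
    if den > 0:
        ratios.append(num / den)
print(max(ratios))   # stays below sqrt(2)
```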
4 Outline of the convergence proof
This section is devoted to sketching the proof of the convergence of the stochastic Douglas-Rachford algorithm. The approach follows the same steps as [6] and is detailed in [15]. The first step of the proof is to study the dynamical behavior of the iterates $(x_n^{\gamma,a})$, where $a \in \mathcal{X}$ and $\gamma > 0$. The Ordinary Differential Equation (ODE) method, well known in the literature of stochastic approximation ([16]), is applied. Consider the continuous-time stochastic process $\bar{x}^{\gamma,a}$ obtained by linearly interpolating with time interval $\gamma$ the iterates $(x_n^{\gamma,a})$:
$$\bar{x}^{\gamma,a}(t) = x_n^{\gamma,a} + \frac{t - n\gamma}{\gamma}\,\big(x_{n+1}^{\gamma,a} - x_n^{\gamma,a}\big), \qquad (4)$$
for all $t$ such that $n\gamma \leq t < (n+1)\gamma$, for all $n \in \mathbb{N}$. Let Assumptions 1–4 hold true. (In the case where the domains are common, i.e., $\mu$-a.s. constant, the moment Assumptions 1 and 2 are sufficient to state the dynamical behavior result; see [12] for an applicative context where the domains are distinct.) Consider the set $C(\mathbb{R}_+, \mathcal{X})$ of continuous functions from $\mathbb{R}_+$ to $\mathcal{X}$ equipped with the topology of uniform convergence on compact intervals. It is shown that the continuous-time stochastic process $\bar{x}^{\gamma,a}$ converges weakly over $C(\mathbb{R}_+, \mathcal{X})$ (i.e., in distribution in $C(\mathbb{R}_+, \mathcal{X})$) as $\gamma \to 0$. Moreover, the limit is proven to be the unique absolutely continuous function $\mathsf{z}$ over $\mathbb{R}_+$ satisfying $\mathsf{z}(0) = a$ and, for almost every $t$, the Differential Inclusion (DI)
$$\dot{\mathsf{z}}(t) \in -\big(\partial F + \partial G\big)(\mathsf{z}(t)) \qquad (5)$$
(see [17]). Differential inclusions like (5) generalize ODEs to set-valued mappings. The DI (5) induces a map that can be extended to a semi-flow over $\mathcal{X}$, still denoted by $\Phi$.
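The interpolated process (4) is easy to materialize; the following sketch evaluates $\bar{x}^{\gamma,a}(t)$ from a finite list of iterates (the function name and the data are illustrative assumptions).

```python
import numpy as np

# Sketch of the interpolated process (4): given iterates x_0, x_1, ... computed with
# step gamma, evaluate the piecewise-linear interpolation at a time t >= 0.

def interpolate(iterates, gamma, t):
    n = int(t // gamma)                      # index such that n*gamma <= t < (n+1)*gamma
    n = min(n, len(iterates) - 2)            # stay inside the available iterates
    frac = (t - n * gamma) / gamma
    return iterates[n] + frac * (iterates[n + 1] - iterates[n])

iterates = np.array([[0.0, 0.0], [0.5, 0.1], [0.8, 0.3], [0.9, 0.4]])
print(interpolate(iterates, gamma=0.1, t=0.25))   # lies between iterates[2] and iterates[3]
```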
The weak convergence of $\bar{x}^{\gamma,a}$ to the solution of the DI (5) is not enough to study the long-term behavior of the iterates $(x_n^{\gamma,a})$. The second step of the proof is to establish a stability result for the Feller Markov chain $(x_n^{\gamma,\nu})$. Denote by $P_\gamma$ its transition kernel. The deterministic counterpart of this step of the proof is the so-called Fejér monotonicity of the sequence generated by algorithm (2). Even if some work has been done [5, 18], there is no immediate way to adapt Fejér monotonicity to our random setting, mainly because of the constant step $\gamma$. As an alternative, we assume Assumptions 5–6 and prove the existence of positive constants such that, for every $n$, a drift inequality of the form (6) holds. In this inequality, $\mathbb{E}_n$ denotes the conditional expectation with respect to the sigma-algebra generated by $x_0^{\gamma,\nu}$ and $\xi_1, \ldots, \xi_n$. Since the Moreau envelope is nonincreasing in the parameter $\gamma$ [6, 15], the function involved in (6) can be replaced by its counterpart at a fixed value of the parameter. Besides, the coercivity required in Assumption 7 implies the coercivity of the corresponding Moreau envelope ([6, 15]). Therefore, under Assumptions 5–7, there exist positive constants such that, for every $n$, an inequality of the form (7) holds.
Inequality (7) can alternatively be seen as a tightness result. It implies that the set of invariant measures of the Markov kernel $P_\gamma$ is not empty for every $\gamma$. Denote by Inv the set (8) gathering these invariant measures as $\gamma$ ranges over small values of the step size.
It remains to characterize the cluster points of Inv as $\gamma \to 0$. To that end, the dynamical behavior result and the stability result are combined. Let Assumptions 1–8 hold true. (Assumptions 3, 4 and 8 are not needed if the domains are common.) Then, the set Inv is tight and, as $\gamma \to 0$, every cluster point of Inv is an invariant measure for the semi-flow $\Phi$. Theorem 1 is a consequence of this fact.
5 Application to structured regularization
This section provides an application of the stochastic Douglas-Rachford algorithm (3) to a regularized optimization problem. Consider Problem (1), where $F$ is a cost function written as an expectation and $G$ is a regularization term. Towards solving (1), many approaches involve the computation of the proximity operator of the regularization term $G$. In the case where $G$ is a structured regularization term, its proximity operator is often difficult to compute. When $G$ is a graph-based regularization, it is possible to apply a stochastic proximal method to handle the regularization [7]. We shall concentrate on the case where $G$ is an overlapping group regularization. In this case, the computation of the proximity operator of $G$ is known to be a bottleneck [21]. We shall apply algorithm (3) to overcome this difficulty.
Consider $\mathcal{X} = \mathbb{R}^d$ with $d \geq 1$, and an integer $L \geq 1$. Consider $L$ subsets $\mathcal{G}_1, \ldots, \mathcal{G}_L$ of $\{1, \ldots, d\}$, possibly overlapping. Set $G(x) = \sum_{l=1}^{L} \|x_{\mathcal{G}_l}\|$, where $x_{\mathcal{G}_l}$ denotes the restriction of $x$ to the set of indices $\mathcal{G}_l$ and $\|\cdot\|$ denotes the Euclidean norm. Set $F(x) = \mathbb{E}\big[\ell(B\langle A, x\rangle)\big]$, where $\ell(t) = \max(0, 1 - t)$ denotes the hinge loss and $\xi = (A, B)$ is a r.v. defined on some probability space with values in $\mathbb{R}^d \times \{-1, +1\}$. In this case, Problem (1) is also called the SVM classification problem, regularized by the overlapping group lasso. It is assumed that the user is provided with i.i.d. copies $(A_n, B_n)$ of the r.v. $(A, B)$ online.
To solve this problem, we implement a stochastic Douglas-Rachford strategy. To that end, the regularization is rewritten $G(x) = L\, \mathbb{E}\big[\|x_{\mathcal{G}_U}\|\big]$, where $U$ is a uniform r.v. over $\{1, \ldots, L\}$. At each iteration $n$ of the stochastic Douglas-Rachford algorithm, the user is provided with the realization $(A_{n+1}, B_{n+1})$ and samples a group index $U_{n+1}$ uniformly in $\{1, \ldots, L\}$. Then, a Douglas-Rachford step is performed, involving the computation of the proximity operators of the functions $\ell(B_{n+1}\langle A_{n+1}, \cdot\rangle)$ and $L\|(\cdot)_{\mathcal{G}_{U_{n+1}}}\|$.
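A sketch of one iteration of this strategy follows, assuming the rewriting $G(x) = L\,\mathbb{E}[\|x_{\mathcal{G}_U}\|]$ above, with explicit proximity operators for the sampled hinge loss and the sampled group norm; the helper names, the toy data and the scaling conventions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of the two proximity operators used in one iteration of the stochastic
# Douglas-Rachford strategy described above, and of the corresponding step (3).

def prox_hinge_linear(v, a, b, gamma):
    # prox of gamma * max(0, 1 - b * <a, .>)  (hinge loss of a single sample)
    u = b * a
    step = min(gamma, max(0.0, (1.0 - u @ v) / (u @ u)))
    return v + step * u

def prox_group_norm(v, group, t):
    # prox of t * || v restricted to `group` ||  (block soft-thresholding)
    out = v.copy()
    norm = np.linalg.norm(v[group])
    out[group] = max(0.0, 1.0 - t / norm) * v[group] if norm > 0 else 0.0
    return out

def stochastic_dr_step(x, a, b, group, L, gamma):
    # one Douglas-Rachford step (3) with the sampled loss and the sampled group
    z = prox_group_norm(x, group, gamma * L)
    return x + prox_hinge_linear(2 * z - x, a, b, gamma) - z

rng = np.random.default_rng(0)
d, L = 10, 3
groups = [np.arange(0, 5), np.arange(3, 8), np.arange(6, 10)]   # overlapping groups
x = np.zeros(d)
for _ in range(1000):
    a = rng.normal(size=d)
    b = 1.0 if a[:5].sum() > 0 else -1.0      # toy labels
    group = groups[rng.integers(L)]            # group index sampled uniformly
    x = stochastic_dr_step(x, a, b, group, L, gamma=0.05)
print(x)
```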
This strategy is compared with a partially stochastic Douglas-Rachford algorithm, deterministic in the regularization $G$, where the fast subroutine FoG-Lasso [21] is used to compute the proximity operator of the regularization $G$. At each iteration $n$, the user is provided with $(A_{n+1}, B_{n+1})$. Then, a Douglas-Rachford step is performed, involving the computation of the proximity operators of the functions $\ell(B_{n+1}\langle A_{n+1}, \cdot\rangle)$ and $G$. Figure 1 demonstrates the advantage of treating the regularization term in a stochastic way.
[Figure 1: comparison of the two algorithms]
[Figure 2: first and last iterates]
In Figure 1, "Stochastic D-R" denotes the stochastic Douglas-Rachford algorithm and "Partially stochastic D-R" denotes the partially stochastic Douglas-Rachford algorithm where the subroutine FoG-Lasso [21] is used at each iteration to compute the true proximity operator of the regularization $G$. Figure 2 shows the appearance of the first and the last iterates. Even if a state-of-the-art procedure [21] is used to compute $\mathrm{prox}_{\gamma G}$, we observe in Figure 1 that Stochastic D-R takes advantage of being a fully stochastic method. This advantage is known to be twofold ([22]). First, the iteration complexity of Stochastic D-R is moderate because $\mathrm{prox}_{\gamma G}$ is never computed. Hence, Stochastic D-R is faster than its partially deterministic counterpart, which uses FoG-Lasso [21] as a subroutine, especially during the first iterations of the algorithms. Moreover, Stochastic D-R seems to perform globally better. This is because every proximity operator involved in Stochastic D-R can be efficiently computed ([23]). Contrary to the proximity operator of $G$ [21], the proximity operator of the sampled group norm is easily computable. The proximity operator of the sampled hinge loss is easily computable as well, and this remains true when the hinge loss is replaced by the logistic loss (logistic regression).
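To illustrate the remark on the logistic case, here is a sketch of the scalar proximity operator of the logistic loss computed by Newton's method on the prox optimality equation; the function name and the fixed iteration count are illustrative choices, not taken from the paper.

```python
import math

# The scalar prox of the logistic loss l(t) = log(1 + exp(-t)) has no closed form,
# but solving t - w + gamma * (sigmoid(t) - 1) = 0 by Newton's method is cheap.

def prox_logistic_scalar(w, gamma, n_newton=20):
    t = w
    for _ in range(n_newton):
        sig = 1.0 / (1.0 + math.exp(-t))
        grad = t - w + gamma * (sig - 1.0)        # derivative of the prox objective
        hess = 1.0 + gamma * sig * (1.0 - sig)    # always >= 1, Newton is well posed
        t -= grad / hess
    return t

print(prox_logistic_scalar(0.3, gamma=1.0))
```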
References
- [1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
- [2] G. Chierchia, A. Cherni, E. Chouzenoux, and J.-C. Pesquet, “Approche de Douglas-Rachford aléatoire par blocs appliquée à la régression logistique parcimonieuse,” in GRETSI, 2017.
- [3] P.-L. Lions and B. Mercier, “Splitting algorithms for the sum of two nonlinear operators,” SIAM J. Numer. Anal., vol. 16, no. 6, pp. 964–979, 1979.
- [4] J. Eckstein and D. P. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Mathematical Programming, vol. 55, no. 1, pp. 293–318, 1992.
- [5] P. Bianchi and W. Hachem, “Dynamical behavior of a stochastic Forward-Backward algorithm using random monotone operators,” ArXiv e-prints, 1508.02845, Aug. 2015, To be published in J. Optim. Theory Appl.
- [6] P. Bianchi, W. Hachem, and A. Salim, “A constant step Forward-Backward algorithm involving random maximal monotone operators,” arXiv preprint arXiv:1702.04144, 2017.
- [7] A. Salim, P. Bianchi, and W. Hachem, “Snake: a stochastic proximal gradient algorithm for regularized problems over large graphs,” in preparation, 2017.
- [8] Z. Shi and R. Liu, “Online and stochastic Douglas-Rachford splitting method for large scale machine learning,” arXiv preprint arXiv:1308.4757, 2013.
- [9] A. Chambolle, M. J. Ehrhardt, P. Richtárik, and C.-B. Schönlieb, “Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging application,” arXiv preprint arXiv:1706.04957, 2017.
- [10] L. Rosasco, S. Villa, and B.-C. Vu, “Stochastic inertial primal-dual algorithms,” arXiv preprint arXiv:1507.00852, 2015.
- [11] P.-L. Combettes and J.-C. Pesquet, “Stochastic forward-backward and primal-dual approximation algorithms with application to online image restoration,” in Signal Processing Conference (EUSIPCO), 2016 24th European. IEEE, 2016, pp. 1813–1817.
- [12] R. Mourya, P. Bianchi, A. Salim, and C. Richard, “An adaptive distributed asynchronous algorithm with application to target localization,” in CAMSAP, 2017.
- [13] H. H. Bauschke and J. M. Borwein, “On projection algorithms for solving convex feasibility problems,” SIAM review, vol. 38, no. 3, pp. 367–426, 1996.
- [14] R. T. Rockafellar and R. J.-B. Wets, Variational analysis, vol. 317 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer-Verlag, Berlin, 1998.
- [15] A. Salim, P. Bianchi, and W. Hachem, “A stochastic Douglas-Rachford algorithm with constant step size,” Tech. Rep., see https://adil-salim.github.io/Research, 2017.
- [16] H. J. Kushner and G. G. Yin, Stochastic approximation and recursive algorithms and applications, vol. 35 of Applications of Mathematics (New York), Springer-Verlag, New York, second edition, 2003, Stochastic Modelling and Applied Probability.
- [17] H. Brézis, Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert, North-Holland mathematics studies. Elsevier Science, Burlington, MA, 1973.
- [18] P.-L. Combettes and J.-C. Pesquet, “Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping,” SIAM Journal on Optimization, vol. 25, no. 2, pp. 1221–1248, 2015.
- [19] J.-C. Fort and G. Pagès, “Asymptotic behavior of a Markovian stochastic algorithm with constant step,” SIAM J. Control Optim., vol. 37, no. 5, pp. 1456–1482 (electronic), 1999.
- [20] P. Bianchi, W. Hachem, and A. Salim, “Asymptotics of constant step stochastic approximations involving differential inclusions,” arXiv preprint arXiv:1612.03831, 2016.
- [21] L. Yuan, J. Liu, and J. Ye, “Efficient methods for overlapping group lasso,” in Advances in NIPS, 2011, pp. 352–360.
- [22] L. Bottou, F. E. Curtis, and J. Nocedal, “Optimization methods for large-scale machine learning,” arXiv preprint arXiv:1606.04838, 2016.
- [23] H. H. Bauschke and P. L. Combettes, Convex analysis and monotone operator theory in Hilbert spaces, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer, New York, 2011.