# Non-stationary Douglas-Rachford and alternating direction method of multipliers: adaptive stepsizes and convergence

We revisit the classical Douglas-Rachford (DR) method for finding a zero of the sum of two maximal monotone operators. Since the practical performance of the DR method crucially depends on the stepsizes, we aim at developing an adaptive stepsize rule. To that end, we take a closer look at a linear case of the problem and use our findings to develop a stepsize strategy that eliminates the need for stepsize tuning. We analyze a general non-stationary DR scheme and prove its convergence for a convergent sequence of stepsizes with summable increments. This, in turn, proves the convergence of the method with the new adaptive stepsize rule. We also derive the related non-stationary alternating direction method of multipliers (ADMM) from such a non-stationary DR method. We illustrate the efficiency of the proposed methods on several numerical examples.


## 1 Introduction

In this paper we consider the Douglas-Rachford (DR) method to solve the problem of finding a zero of the sum of two maximal monotone operators, i.e., solving:

$$0 \in (A+B)x, \tag{1}$$

where $A$ and $B$ are two (possibly multivalued) maximal monotone operators from a Hilbert space into itself [35].

The DR method originated from [15] and was initially proposed to solve the discretization of stationary and non-stationary heat equations, where the involved monotone operators are linear (namely, the discretizations of the second derivatives in different spatial directions). The iteration uses the resolvents $J_{tA} := (I+tA)^{-1}$ and $J_{tB} := (I+tB)^{-1}$ ($I$ is the identity map), and from the original paper [15, Eq. (7.4), (7.5)] one can extract the iteration

$$u^{n+1} := J_{tB}\big(J_{tA}((I-tB)u^n) + tBu^n\big), \tag{2}$$

where $t > 0$ is a given stepsize. This iteration scheme also makes sense for general maximal monotone operators as soon as $B$ is single-valued. It has been observed in [29] that the iteration can be rewritten for arbitrary maximally monotone operators by substituting $y^n := (I+tB)u^n$ and using the identity

$$tBJ_{tB}y = tB(I+tB)^{-1}y = y - (I+tB)^{-1}y = y - J_{tB}y \tag{3}$$

to get

$$y^{n+1} := y^n + J_{tA}(2J_{tB}y^n - y^n) - J_{tB}y^n. \tag{4}$$

Comparing (2) and (4), we see that (4) does not require to evaluate $Bu^n$, which avoids assuming that $B$ is single-valued as in (2); otherwise, $Bu^n$ is not uniquely defined. While $(u^n)$ in (2) converges to a solution of (1), $(y^n)$ in (4) is just an intermediate sequence converging to some $y^\star$ such that $J_{tB}y^\star$ is a solution of (1). Therefore, (2) gives us a convenient form to study the DR method in the framework of fixed-point theory. Note that the iterations (2) and (4) are equivalent in the stationary case, but not in the non-stationary case, i.e., when the stepsize varies along the iterations; we will shed more light on this in Section 2.2.
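In the stationary linear case, the equivalence of (2) and (4) under the substitution $u^n = J_{tB}y^n$ can be checked numerically. The following sketch (our own construction, with illustrative random matrices, not from the paper) runs both iterations side by side:

```python
import numpy as np

# Numerical check (our sketch): in the stationary linear case, iterations (2)
# and (4) stay linked by u^n = J_{tB} y^n when started with y^0 = (I + tB) u^0.
rng = np.random.default_rng(0)
n, t = 8, 0.5
RA = rng.standard_normal((n, n)); A = RA.T @ RA / n   # symmetric PSD, hence monotone
RB = rng.standard_normal((n, n)); B = RB.T @ RB / n
I = np.eye(n)
JtA = np.linalg.inv(I + t * A)   # resolvent J_{tA} = (I + tA)^{-1}
JtB = np.linalg.inv(I + t * B)

u = rng.standard_normal(n)
y = u + t * (B @ u)              # y^0 = (I + tB) u^0, so u^0 = J_{tB} y^0
for _ in range(25):
    u = JtB @ (JtA @ (u - t * (B @ u)) + t * (B @ u))   # iteration (2)
    y = y + JtA @ (2 * (JtB @ y) - y) - JtB @ y         # iteration (4)
gap = np.linalg.norm(u - JtB @ y)  # should be at machine precision
```

The gap stays at the level of rounding errors, reflecting that each step of (4) maps $y^n$ with $u^n = J_{tB}y^n$ to $y^{n+1}$ with $u^{n+1} = J_{tB}y^{n+1}$.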

From a practical point of view, the performance of a DR scheme mainly depends on the following two aspects:

• Good stepsize $t$: It seems to be generally acknowledged that the choice of the stepsize is crucial for the algorithmic performance of the method, but a general rule to choose the stepsize seems to be missing [2, 16]. So far, the convergence theory of DR methods provides some theoretical guidance to select the parameter in a given range in order to guarantee convergence of the method. Such a choice is often globally fixed for all iterations and does not take into account local structures of the underlying operators $A$ and $B$. Moreover, the global convergence rate of the DR method is known to be sublinear under only a monotonicity assumption, and is often stated for an averaging sequence [13, 14, 24]. Several experiments have shown that DR methods achieve a better practical rate than this theoretical bound [33] when using the last iterate (i.e., not an averaging sequence). In the special case of convex optimization problems, the Douglas-Rachford method is equivalent to the alternating direction method of multipliers (ADMM) (see, e.g., the recent [21] for a short historical account), and there are several proposals for dynamic stepsizes for ADMM [25, 28, 37, 41], but we are not aware of a method that applies to DR in the case of monotone operators. The recent work [30] provides explicit choices for constant stepsizes in cases where the monotone operators possess further properties.

• Proper metric: Since the DR method is not affine invariant in the way Newton's method is, the choice of metric and preconditioning seems to be crucial to accelerate its performance. Some researchers have studied this aspect from different viewpoints, see, e.g., [10, 8, 19, 20, 23, 34]. Clearly, the choice of a metric and a preconditioner also affects the choice of the stepsize.

Note that a metric choice often depends on the variant of the method, while the choice of stepsize depends on problem structures such as strong monotonicity parameters and Lipschitz constants [30]. In the general case, where $A$ and $B$ are only monotone, we only have a general rule to select the parameter to obtain a sublinear convergence rate [14, 16, 24]. This stepsize depends on the mentioned global parameters only and does not adequately capture the local structure of $A$ and $B$ to adaptively update $t$. For instance, a linesearch procedure that estimates a local Lipschitz constant for computing the stepsize in first-order methods can beat the optimal stepsize based on the global Lipschitz constant [5], and the Barzilai-Borwein stepsize in gradient descent methods essentially exploits local curvature of the objective function to obtain good performance.

Our contribution:

We prove the convergence of a new version of the non-stationary Douglas-Rachford method for the case where both operators are merely maximally monotone. Moreover, we propose a very simple adaptive stepsize rule, with a theoretical convergence guarantee, and demonstrate that this rule does improve convergence in practical situations. We also transfer our results to the case of ADMM and obtain a new adaptive rule that outperforms previously known adaptive ADMM methods while retaining a convergence guarantee. Our stepsize rule is relatively simple and incurs no significant computational effort beyond computing the norms of two vectors.

Paper organization: We begin with an analysis of the case of linear monotone operators in section 2, analyze the convergence of the non-stationary form of the iteration (2), i.e., the form where $t_n$ varies with $n$, in section 3, and then propose adaptive stepsize rules in section 4. Section 5 extends the analysis to non-stationary ADMM. Finally, section 6 provides several numerical experiments for the DR scheme and ADMM using our new stepsize rule.

### 1.1 State of the art

There are several results on the convergence of the iteration (4). The seminal paper [29] showed that, for any positive stepsize $t$, the iteration map in (4) is firmly nonexpansive, that the sequence $(y^n)$ weakly converges to a fixed point of the iteration map [29, Prop. 2], and that $(u^n)$ weakly converges to a solution of the inclusion (1) as soon as $A$, $B$, and $A+B$ are maximal monotone [29, Theorem 1]. In the case where one of the operators is coercive and Lipschitz continuous, linear convergence was also shown in [29, Proposition 4]. These results have been extended in various ways. Let us attempt to summarize some key contributions on the DR method. Eckstein and Bertsekas [16] showed that the DR scheme can be cast as a special case of the proximal point method [35]. This allowed the authors to exploit inexact computation from the proximal point method [35]. They also presented a connection between the DR method and the alternating direction method of multipliers (ADMM). In [39], Svaiter proved weak convergence of the DR method in Hilbert spaces without the assumption that $A+B$ is maximal monotone, and the proof has been simplified in [38]. Combettes [11] cast the DR method as a special case of averaging operators in a fixed-point framework. Applications of the DR method have been studied in [12]. The convergence rate of the DR method was first studied in [29] for the strongly monotone case, while a sublinear rate was later proved in [24]. More intensive research on convergence rates of DR methods can be found in [13, 14, 30, 31]. The DR method has been extended to an accelerated variant in [33], albeit for a special setting. In [27], the authors analyzed a non-stationary DR method derived from (4) in the framework of perturbations of non-expansive iterations and showed convergence for convergent stepsize sequences with summable errors.

The DR method, together with its dual variant ADMM, has become extremely popular in recent years due to a wide range of applications in image processing and machine learning [6, 26]; it is unnecessary to recall them all here.

In terms of stepsize selection for DR schemes as well as for ADMM methods, there seems to be very little work available in the literature. Some general rules for fixed stepsizes based on further properties of the operators such as strong monotonicity, Lipschitz continuity, and coercivity are given in [18, 30], and it is shown there that the resulting linear rates are tight. Heuristic rules for fixed stepsizes motivated by quadratic problems are derived in [20]. A self-adaptive stepsize for ADMM proposed in [25] seems to be one of the first works in this direction. The recent works [41, 42] also proposed an adaptive update rule for the stepsize in ADMM based on a spectral estimation. Some other papers rely on theoretical analysis to choose an optimal stepsize, such as [17], but this only works in the quadratic case. In [28], the authors proposed a nonincreasing adaptive rule for the penalty parameter in ADMM. Another update rule for ADMM can be found in [37]. While ADMM is a dual variant of the DR scheme, we have unfortunately not seen any work that transfers such an adaptive stepsize from ADMM to the DR scheme, where the more general case of monotone operators can be handled. In addition, an adaptive stepsize for the DR scheme itself seems to be absent from the literature.

### 1.2 A motivating linear example

While the Douglas-Rachford iteration (weakly) converges for all positive stepsizes $t$, it seems to be folk wisdom that there is a "sweet spot" for the stepsize which leads to fast convergence. We illustrate this effect with a simple linear example. We consider the linear equation

$$0 = Ax + Bx, \tag{5}$$

where $A$ and $B$ are two square matrices. We choose symmetric positive semidefinite matrices $A$ and $B$ such that $A+B$ has full rank, and thus the equation has zero as its unique solution. (The exact construction builds $A$ and $B$ from rectangular factors whose entries are drawn from the standard Gaussian distribution in Matlab.)

Since $B$ is single-valued, we directly use the iteration (2).

###### Remark 1.1.

Note that a shift of the operator $A$ would allow to treat the inhomogeneous equation $b = Ax + Bx$: if $\hat x$ is a solution of this equation, then one sees that iteration (2) applied to the shifted operator is equivalent to applying the iteration to the homogeneous problem for the residual $u^n - \hat x$.

We ran the DR scheme (2) for a range of different values of $t$ and show the residuals in semi-log scale on the left of Figure 1. One observes the following typical behavior for this example:

• A not too small stepsize leads to good progress in the beginning, but slows down considerably in the end.

• Large stepsizes are slower in the beginning and tend to produce a non-monotone decrease of the error.

• Somewhere in the middle, there is a stepsize which performs much better than the small and large stepsizes.

In this particular example, one intermediate stepsize greatly outperforms the others. On the right of Figure 1, we show the norm of the residual after a fixed number of iterations for varying stepsizes. One can see that there is indeed a sweet spot for the stepsize. Note that its location is by no means universal: the sweet spot varies with the problem size, with the ranks of $A$ and $B$, and even between particular instances of this linear example.
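The sweet-spot behavior is easy to reproduce. The following sketch (our own construction; the sizes, ranks, and stepsizes are illustrative, not the exact values of the experiment in Figure 1) runs iteration (2) for several fixed stepsizes on a random instance of (5):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
RA = rng.standard_normal((20, n)); A = RA.T @ RA   # symmetric PSD, rank-deficient
RB = rng.standard_normal((35, n)); B = RB.T @ RB   # ranks chosen so A + B has full rank
I = np.eye(n)

def dr_residual(t, iters=200):
    """Run iteration (2) with fixed stepsize t; zero is the unique solution of (5),
    so the norm of the iterate is the distance to the solution."""
    JtA = np.linalg.inv(I + t * A)
    JtB = np.linalg.inv(I + t * B)
    u = np.ones(n)
    for _ in range(iters):
        Bu = B @ u
        u = JtB @ (JtA @ (u - t * Bu) + t * Bu)
    return np.linalg.norm(u)

residuals = {t: dr_residual(t) for t in (1e-3, 1e-2, 1e-1, 1.0, 10.0)}
```

Printing `residuals` for a given instance typically shows final errors varying by many orders of magnitude across stepsizes, with an intermediate value performing best.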

## 2 Analysis of the linear monotone inclusion

In order to develop an adaptive stepsize for our non-stationary DR method, we first consider the linear problem instance of (1). We consider the original DR scheme (2) instead of (4), since (2) generates the sequence $(u^n)$ which converges to a solution of (1), while the sequence $(y^n)$ computed by (4) does not converge to a solution and its limit depends on the stepsize in general.

### 2.1 The construction of adaptive stepsize for single-valued operator B

When both $A$ and $B$ are linear, the DR scheme (2) can be expressed as a fixed-point iteration of the following mapping:

$$H_t := J_{tB}\big(J_{tA}(I-tB) + tB\big) = (I+tB)^{-1}(I+tA)^{-1}(I+t^2AB) = (I+tA+tB+t^2AB)^{-1}(I+t^2AB). \tag{6}$$

Recall that, by Remark 1.1, all of this section applies not only to the problem $0 = Ax + Bx$ but also to the inhomogeneous problem $b = Ax + Bx$. The notion of a monotone operator has a natural equivalent for matrices which is, however, not widely used. Hence, we recall that a matrix $A$ is called monotone if, for all $x$, it holds that $\langle Ax, x\rangle \ge 0$. Note that any symmetric positive semidefinite (spd) matrix is monotone, but a monotone matrix is not necessarily spd. Examples of monotone matrices that are not spd are

$$A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \quad\text{and}\quad A = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \ \text{ with } |t| \le 2.$$

The first matrix is skew symmetric, i.e., $A^T = -A$, and any such matrix is monotone. Note that even if $A$ and $B$ are spd (as in our example in Section 1.2), the iteration map $H_t$ in (6) is in general not symmetric. Consequently, the asymptotic convergence rate of the iteration scheme (2) is not governed by the norm of $H_t$ but by its spectral radius, which is the largest magnitude of an eigenvalue of $H_t$ (cf. [22, Theorem 11.2.1]). Moreover, the eigenvalues and eigenvectors of $H_t$ are complex in general.

First, it is clear from the derivation of $H_t$ that the eigenspace of $H_t$ for the eigenvalue $1$ consists exactly of the solutions of $0 = Ax + Bx$.

In the following, for any $z \in \mathbb{C}$ (the set of complex numbers) and $r > 0$, we denote by $B_r(z)$ the ball of radius $r$ centered at $z$. We now estimate the eigenvalues of $H_t$ that are different from $1$.

###### Lemma 2.1.

Let $A$ and $B$ be monotone, $t > 0$, and let $H_t$ be defined by (6). Let $\lambda$ be an eigenvalue of $H_t$ with corresponding eigenvector $z$. Assume that $\lambda \ne 1$ and define $c$ by

$$c := \frac{\operatorname{Re}(\langle Bz, z\rangle)}{t^{-1}\|z\|^2 + t\|Bz\|^2}. \tag{7}$$

Then, we have $c \ge 0$ and

$$\Big|\lambda - \frac12\Big| \le \sqrt{\frac14 - \frac{c}{1+2c}} \le \frac12,$$

i.e., $\lambda \in B_{\sqrt{1/4 - c/(1+2c)}}\big(\tfrac12\big)$.

###### Proof.

Note that for a real, linear, and monotone map $B$ and a complex vector $z = \xi + i\eta$, it holds that $\operatorname{Re}(\langle Bz, z\rangle) = \langle B\xi, \xi\rangle + \langle B\eta, \eta\rangle \ge 0$, and thus $c \ge 0$.

We can see from (6) that any pair of an eigenvalue $\lambda$ and eigenvector $z$ of $H_t$ fulfills

$$z + t^2ABz = \lambda\big(z + tAz + tBz + t^2ABz\big).$$

Now, if we denote $u := Bz$, then this expression becomes

$$z + t^2Au = \lambda z + \lambda tAz + \lambda tu + \lambda t^2Au,$$

which rearranges to

$$-(\lambda - 1)z - \lambda tu = tA\big(\lambda z + (\lambda - 1)tu\big).$$

Hence, by monotonicity of $A$, we can derive from the above relation that

$$0 \le \operatorname{Re}\big(\langle \lambda z + (\lambda-1)tu,\, -(\lambda-1)z - \lambda tu\rangle\big) = -\operatorname{Re}\big(\lambda(\bar\lambda - 1)\big)\|z\|^2 - \big(|\lambda|^2 + |\lambda-1|^2\big)t\operatorname{Re}(\langle u, z\rangle) - \operatorname{Re}\big((\lambda - 1)\bar\lambda\big)t^2\|u\|^2,$$

which can be rearranged to

$$\big(|\lambda|^2 + |\lambda-1|^2\big)\operatorname{Re}(\langle u, z\rangle) \le \operatorname{Re}\big(\lambda - |\lambda|^2\big)\,t^{-1}\|z\|^2 + \operatorname{Re}\big(\bar\lambda - |\lambda|^2\big)\,t\|u\|^2.$$

Denoting $\lambda = x + iy$, the last expression reads as

$$\big(x^2 + (x-1)^2 + 2y^2\big)\operatorname{Re}(\langle u, z\rangle) \le \big(x - x^2 - y^2\big)\Big(\frac{\|z\|^2}{t} + t\|u\|^2\Big).$$

Recalling the definition of $c$ in (7) and $u = Bz$, we get

$$\big(x^2 + (x-1)^2 + 2y^2\big)c \le x - x^2 - y^2.$$

This is equivalent to

$$0 \le x - x^2 - y^2 - cx^2 - c(x-1)^2 - 2cy^2 = (1+2c)\big(x - x^2 - y^2\big) - c,$$

which, in turn, is equivalent to $x - x^2 - y^2 \ge \frac{c}{1+2c}$. Since $\big(x - \tfrac12\big)^2 + y^2 = \tfrac14 - (x - x^2 - y^2)$, it follows that $\big(x - \tfrac12\big)^2 + y^2 \le \tfrac14 - \frac{c}{1+2c}$, which shows the desired estimate. ∎

In general, the eigenvalues of $H_t$ depend on $t$ in a complicated way. For $t = 0$, we have $H_0 = I$ and hence all eigenvalues are equal to one. For growing $t$, some eigenvalues move into the interior of the circle $B_{1/2}(\tfrac12)$, and for large $t$ it seems that all eigenvalues tend back to the boundary of this circle; see Figure 2 for an illustration of the eigenvalue distribution.
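The disc bound of Lemma 2.1 is easy to verify numerically. In the sketch below (our own construction), monotone test matrices are built as a positive semidefinite part plus a skew-symmetric part, and all eigenvalues of $H_t$ are checked to lie in the closed ball $B_{1/2}(\tfrac12)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 12, 0.9

def random_monotone(rng, n):
    """PSD part plus skew part: <Ax, x> = <(R^T R)x, x> >= 0, so A is monotone."""
    R = rng.standard_normal((n, n))
    K = rng.standard_normal((n, n))
    return R.T @ R + (K - K.T)

A = random_monotone(rng, n)
B = random_monotone(rng, n)
I = np.eye(n)

# H_t = (I + tA + tB + t^2 AB)^{-1} (I + t^2 AB), cf. (6)
Ht = np.linalg.solve(I + t * A + t * B + t**2 * (A @ B), I + t**2 * (A @ B))
eigvals = np.linalg.eigvals(Ht)
max_dist = np.max(np.abs(eigvals - 0.5))   # Lemma 2.1: at most 1/2
```

Repeating this for other draws of $A$, $B$ and other values of $t$ always keeps `max_dist` at or below $1/2$ (up to rounding).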

###### Remark 2.1.

It appears that Lemma 2.1 is related to Proposition 4.10 of [3] and also to the fact that the iteration mapping is (in the general nonlinear case) known to be not only non-expansive but firmly non-expansive (cf. [16, Lemma 1] and [16, Figure 1]). In general, firm non-expansiveness allows over-relaxation of the method, and indeed one can easily see this in the linear case as well: if $\lambda$ is an eigenvalue of $H_t$, then it lies in $B_{1/2}(\tfrac12)$ (when it is not equal to one), and the corresponding eigenvalue of the relaxed iteration map

$$H_t^\rho = (1-\rho)I + \rho H_t$$

is $(1-\rho) + \rho\lambda$ and lies in $B_{\rho/2}\big(1 - \tfrac{\rho}{2}\big)$. Therefore, for $0 < \rho \le 2$, all eigenvalues different from one of the relaxed iteration

$$u^{n+1} = (1-\rho)u^n + \rho H_t u^n$$

lie in a circle of radius $\rho/2$ centered at $1 - \rho/2$, and hence the iteration is still non-expansive. It is known that relaxation can speed up convergence, but we will not investigate this in this paper.

Lemma 2.1 tells us a little more than that all eigenvalues of the iteration map lie in a circle centered at $\tfrac12$ with radius $\tfrac12$. Especially, all eigenvalues except for $\lambda = 1$ have magnitude strictly smaller than one if $\operatorname{Re}(\langle Bz, z\rangle) > 0$ for all corresponding eigenvectors $z$. This implies that the iteration map is indeed asymptotically contracting outside the set of solutions of (1), which proves that the stationary iteration converges to a zero of $A+B$ at a linear rate. Note that this does not imply convergence in the non-stationary case.

To optimize the convergence speed, we aim at minimizing the spectral radius of $H_t$, i.e., the largest magnitude of an eigenvalue of $H_t$, but there seems to be little hope to minimize this quantity explicitly.

Here is a heuristic argument based on Lemma 2.1, which we will use to derive an adaptive stepsize rule: note that $c \mapsto \frac{c}{1+2c}$ is increasing and hence, to minimize the upper bound on $|\lambda - \tfrac12|$ (more precisely: the distance of $\lambda$ to $\tfrac12$), we want to make $c$ from (7) as large as possible. This is achieved by minimizing the denominator of $c$ over $t$, which happens for

$$t = \frac{\|z\|}{\|Bz\|}.$$

This gives $c = \frac{\operatorname{Re}(\langle Bz, z\rangle)}{2\|z\|\,\|Bz\|}$, and note that $c \le \tfrac12$ by the Cauchy-Schwarz inequality. This motivates an adaptive choice for the stepsize as

$$t_n := \frac{\|u^n\|}{\|Bu^n\|} \tag{8}$$

in the Douglas-Rachford iteration scheme (2).
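As a sketch (our own construction, with illustrative random matrices), the rule (8) plugs directly into iteration (2); note that each change of $t_n$ requires new resolvents, and, as discussed in section 4, the raw rule comes without a convergence guarantee:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
RA = rng.standard_normal((15, n)); A = RA.T @ RA   # symmetric PSD operators
RB = rng.standard_normal((30, n)); B = RB.T @ RB
I = np.eye(n)

def dr_adaptive(u, iters=100):
    """Iteration (2) with the heuristic stepsize t_n = ||u^n|| / ||B u^n|| of (8)."""
    for _ in range(iters):
        Bu = B @ u
        t = np.linalg.norm(u) / max(np.linalg.norm(Bu), 1e-16)  # guard: Bu may vanish
        JtA = np.linalg.inv(I + t * A)   # resolvents must be refactorized as t changes
        JtB = np.linalg.inv(I + t * B)
        u = JtB @ (JtA @ (u - t * Bu) + t * Bu)
    return u

u_final = dr_adaptive(np.ones(n))
```

For problems where factorizations of $I + tA$ and $I + tB$ are cached, the refactorization cost is the practical price of adaptivity; the safeguarded rule of section 4 changes $t_n$ less and less over time, mitigating this.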

###### Remark 2.2.

One can use the above derivation to deduce a good constant stepsize. In fact, this is also in line with the stepsize that gives the best linear rate derived in [29, Proposition 4], which is attained for a choice of $t$ depending on the Lipschitz constant of $B$. However, this choice does not perform well in practice in our experiments.

Since little is known about the non-stationary Douglas-Rachford iteration in general (besides the result from [27] on convergent stepsizes with summable errors), we turn to an investigation of this method in Section 3. Before we do so, we generalize the heuristic stepsize to the case of multivalued $B$.

### 2.2 The construction of adaptive stepsize for non-single-valued B

In the case of a multivalued $B$, one needs to apply the iteration (4) instead of (2). To motivate an adaptive choice for the stepsize in this case, we again consider the situation of linear operators.

In the linear case, the iteration (4) is given by the iteration matrix

$$F_t = J_{tA}(2J_{tB} - I) - J_{tB} + I.$$

Comparing this with the iteration map $H_t$ from (6) (corresponding to (2)), one notes that

$$F_t = (I+tB)H_t(I+tB)^{-1},$$

i.e., the matrices $H_t$ and $F_t$ are similar and hence have the same eigenvalues. Moreover, if $z$ is an eigenvector of $H_t$ with eigenvalue $\lambda$, then $(I+tB)z$ is an eigenvector of $F_t$ for the same eigenvalue $\lambda$.

However, in the case of the iteration (4) we do not assume that $B$ is single-valued, and thus the adaptive stepsize using the quotient $\|u\|/\|Bu\|$ cannot be used directly. However, again due to (3), we can rewrite this quotient without applying $B$ and get, with $u = J_{tB}y$, that

$$\frac{\|u\|}{\|Bu\|} = \frac{\|J_{tB}y\|}{\big\|\tfrac1t(y - J_{tB}y)\big\|} = \frac{t\|J_{tB}y\|}{\|y - J_{tB}y\|}. \tag{9}$$

Note that the two iteration schemes (2) and (4) are not equivalent in the non-stationary and non-linear case. Indeed, let us consider $u^n := J_{t_{n-1}B}y^n$, so that, by induction, $u^{n+1} = J_{t_nB}y^{n+1}$. Substituting into (2), we obtain

$$y^{n+1} = J_{t_nA}(u^n - t_nBu^n) + t_nBu^n. \tag{10}$$

From (3) we have

$$Bu^n = BJ_{t_{n-1}B}y^n = \tfrac{1}{t_{n-1}}\big(y^n - J_{t_{n-1}B}y^n\big).$$

Substituting $u^n$ and $Bu^n$ into (10), we obtain

$$y^{n+1} = J_{t_nA}\Big(J_{t_{n-1}B}y^n - \tfrac{t_n}{t_{n-1}}\big(y^n - J_{t_{n-1}B}y^n\big)\Big) + \tfrac{t_n}{t_{n-1}}\big(y^n - J_{t_{n-1}B}y^n\big) = \tfrac{t_n}{t_{n-1}}y^n + J_{t_nA}\Big(\big(1 + \tfrac{t_n}{t_{n-1}}\big)J_{t_{n-1}B}y^n - \tfrac{t_n}{t_{n-1}}y^n\Big) - \tfrac{t_n}{t_{n-1}}J_{t_{n-1}B}y^n.$$

Updating $t_n$ according to (9) would then give

$$t_n := \kappa_n t_{n-1}, \quad\text{where}\quad \kappa_n := \frac{\|J_{t_{n-1}B}y^n\|}{\|y^n - J_{t_{n-1}B}y^n\|}.$$

In summary, we can write an alternative DR scheme for solving (1) as

$$\left\{\begin{aligned} u^n &:= J_{t_{n-1}B}y^n, \\ \kappa_n &:= \frac{\|u^n\|}{\|u^n - y^n\|}, \\ t_n &:= \kappa_n t_{n-1}, \\ v^n &:= J_{t_nA}\big((1+\kappa_n)u^n - \kappa_n y^n\big), \\ y^{n+1} &:= v^n + \kappa_n(y^n - u^n). \end{aligned}\right. \tag{11}$$

This scheme has essentially the same per-iteration complexity as the standard DR method, since the computation of $\kappa_n$ does not significantly increase the cost.

Note that the non-stationary scheme (11) is notably different from the non-stationary scheme derived directly from (4) (which has been analyzed in [27]). To the best of our knowledge, the scheme (11) is new.
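As an illustration of scheme (11) with a genuinely multivalued operator, the sketch below (our own construction, not an experiment from the paper) takes $A = \partial\|\cdot\|_1$, whose resolvent is soft-thresholding, and $Bx = Qx$ for a symmetric positive definite $Q$; only the resolvents $J_{t A}$ and $J_{t B}$ are needed:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
R = rng.standard_normal((n, n))
Q = R.T @ R + np.eye(n)          # symmetric positive definite

def J_A(t, y):
    """Resolvent of A = subdifferential of the l1-norm: soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def J_B(t, y):
    """Resolvent of B = Q: solve (I + tQ) u = y."""
    return np.linalg.solve(np.eye(n) + t * Q, y)

def dr_scheme11(y, t, iters=50):
    """Run the non-stationary scheme (11) with the adaptive factor kappa_n."""
    for _ in range(iters):
        u = J_B(t, y)
        kappa = np.linalg.norm(u) / max(np.linalg.norm(u - y), 1e-16)  # guard
        t = kappa * t
        v = J_A(t, (1.0 + kappa) * u - kappa * y)
        y = v + kappa * (y - u)
    return u, t

u, t = dr_scheme11(rng.standard_normal(n), 1.0)
```

Note that the raw multiplicative update of $t$ comes without a convergence guarantee; section 4 replaces it with a safeguarded version.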

## 3 Convergence of the non-stationary DR method

In this section, we prove weak convergence of the new non-stationary scheme (11). We follow the approach of [38, 39] and restate the DR iteration as follows: given $u^0$ and $b^0 \in B(u^0)$ and a sequence $(t_n)_{n\ge1}$, at each iteration $n \ge 1$, we iterate

$$\begin{cases} a^n \in A(v^n), & v^n + t_na^n = u^{n-1} - t_nb^{n-1}, \\ b^n \in B(u^n), & u^n + t_nb^n = v^n + t_nb^{n-1}. \end{cases} \tag{12}$$

Note that, in the case of a single-valued $B$, this iteration reduces to

$$u^n = J_{t_nB}\big(J_{t_nA}(u^{n-1} - t_nBu^{n-1}) + t_nBu^{n-1}\big),$$

and this scheme can, as shown in Section 2.2, be transformed into the non-stationary iteration scheme (11).

Below are some consequences of (12) which we will need in our analysis:

$$u^{n-1} - u^n = t_n(a^n + b^n), \tag{13}$$

$$t_n(b^{n-1} - b^n) = u^n - v^n, \tag{14}$$

$$u^n - v^n + t_n(a^n + b^n) = t_n(a^n + b^{n-1}) = u^{n-1} - v^n. \tag{15}$$

Before proving our convergence result, we state the following lemma.

###### Lemma 3.1.

Let $(\alpha_n)$, $(\beta_n)$, and $(\omega_n)$ be three nonnegative sequences, and let $(\tau_n)$ be a bounded sequence such that, for all $n$,

$$0 < \underline\tau \le \tau_n \le \bar\tau, \quad |\tau_n - \tau_{n+1}| \le \omega_n, \quad\text{and}\quad \sum_{n=0}^\infty \omega_n < \infty.$$

If $\alpha_{n-1} + \tau_n\beta_{n-1} \ge \alpha_n + \tau_n\beta_n$ for all $n \ge 1$, then $(\alpha_n)$ and $(\beta_n)$ are bounded.

###### Proof.

If $\tau_{n+1} \le \tau_n$, then

$$\alpha_{n-1} + \tau_n\beta_{n-1} \ge \alpha_n + \tau_n\beta_n \ge \alpha_n + \tau_{n+1}\beta_n.$$

If $\tau_{n+1} > \tau_n$, then $\frac{\tau_n}{\tau_{n+1}} < 1$ and

$$\alpha_{n-1} + \tau_n\beta_{n-1} \ge \alpha_n + \tau_n\beta_n \ge \frac{\tau_n}{\tau_{n+1}}\alpha_n + \tau_n\beta_n = \frac{\tau_n}{\tau_{n+1}}\big(\alpha_n + \tau_{n+1}\beta_n\big).$$

By the assumptions, $\frac{\tau_n}{\tau_{n+1}} = 1 - \frac{\tau_{n+1} - \tau_n}{\tau_{n+1}} \ge 1 - \frac{\omega_n}{\underline\tau}$, and, without loss of generality, we may assume that the latter quantity is positive (which is fulfilled for $n$ large enough, because $\omega_n \to 0$). Thus, in both cases, we obtain

$$\alpha_{n-1} + \tau_n\beta_{n-1} \ge \Big(1 - \frac{\omega_n}{\underline\tau}\Big)\big(\alpha_n + \tau_{n+1}\beta_n\big).$$

Recursively, we get

$$\alpha_0 + \tau_1\beta_0 \ge \prod_{l=1}^n\Big(1 - \frac{\omega_l}{\underline\tau}\Big)\big(\alpha_n + \tau_{n+1}\beta_n\big).$$

Under the assumption $\sum_n \omega_n < \infty$, we have $\prod_{l=1}^n\big(1 - \frac{\omega_l}{\underline\tau}\big) \ge P$ for some $P > 0$ and all $n$. Then, we have $\alpha_n + \tau_{n+1}\beta_n \le \frac{1}{P}\big(\alpha_0 + \tau_1\beta_0\big)$. This shows that $(\alpha_n + \tau_{n+1}\beta_n)$ is bounded. Since $(\alpha_n)$ and $(\beta_n)$ are nonnegative and $\tau_{n+1} \ge \underline\tau > 0$, it implies that $(\alpha_n)$ and $(\beta_n)$ are bounded. ∎

###### Theorem 3.1 (Convergence of non-stationary DR).

Let $A$ and $B$ be maximally monotone and let $(t_n)$ be a positive sequence such that

$$0 < \underline t \le t_n \le \bar t \quad\text{and}\quad \sum_{n=1}^\infty |t_n - t_{n+1}| < \infty,$$

where $\underline t$ and $\bar t$ are given. Then, the sequence $\big((u^n, b^n)\big)$ generated by the iteration scheme (12) weakly converges to some $(u^\star, b^\star)$ in the extended solution set $S(A,B) := \{(u, b) : b \in B(u),\ -b \in A(u)\}$ of (1); in particular, $u^n$ weakly converges to a solution $u^\star$ of (1).

###### Proof.

The proof of this theorem follows the proof of [39, Theorem 1]. First, we observe that, for any $(u, b) \in S(A,B)$, we have

$$\begin{aligned} \langle u^{n-1} - u^n, u^n - u\rangle &= t_n\langle a^n + b^n, u^n - u\rangle &&\text{by (13)} \\ &\ge t_n\langle a^n + b, u^n - u\rangle &&\text{$B$ is monotone} \\ &\ge t_n\langle a^n + b, u^n - v^n\rangle. &&\text{$A$ is monotone} \end{aligned}$$

From this and (14) it follows that

$$\langle u^{n-1} - u^n, u^n - u\rangle + t_n^2\langle b^{n-1} - b^n, b^n - b\rangle \ge t_n\langle a^n + b, u^n - v^n\rangle + t_n\langle u^n - v^n, b^n - b\rangle = t_n\langle u^n - v^n, a^n + b^n\rangle.$$

Moreover, by (13) and (14) it holds that

$$\|u^{n-1} - u^n\|^2 + t_n^2\|b^{n-1} - b^n\|^2 = t_n^2\|a^n + b^n\|^2 + \|u^n - v^n\|^2,$$

and thus

$$\begin{aligned} \|u^{n-1} - u\|^2 + t_n^2\|b^{n-1} - b\|^2 &= \|u^{n-1} - u^n + u^n - u\|^2 + t_n^2\|b^{n-1} - b^n + b^n - b\|^2 \\ &= \|u^n - u\|^2 + t_n^2\|b^n - b\|^2 + \|u^{n-1} - u^n\|^2 + t_n^2\|b^{n-1} - b^n\|^2 \\ &\quad + 2\langle u^{n-1} - u^n, u^n - u\rangle + 2t_n^2\langle b^{n-1} - b^n, b^n - b\rangle \\ &\ge \|u^n - u\|^2 + t_n^2\|b^n - b\|^2 + \|u^n - v^n + t_n(a^n + b^n)\|^2. \end{aligned} \tag{16}$$

We see from (16) that

$$\|u^{n-1} - u\|^2 + t_n^2\|b^{n-1} - b\|^2 \ge \|u^n - u\|^2 + t_n^2\|b^n - b\|^2,$$

and using Lemma 3.1 with $\alpha_n := \|u^n - u\|^2$, $\beta_n := \|b^n - b\|^2$, $\tau_n := t_n^2$, and $\omega_n := |t_n^2 - t_{n+1}^2| \le 2\bar t\,|t_n - t_{n+1}|$ (which is summable), we can conclude that both sequences $(u^n)$ and $(b^n)$ are bounded.

Again from (16), we can deduce using (15) that

$$\|u^{n-1} - u\|^2 + t_n^2\|b^{n-1} - b\|^2 \ge \|u^n - u\|^2 + t_n^2\|b^n - b\|^2 + \|u^{n-1} - v^n\|^2 = \|u^n - u\|^2 + t_n^2\|b^n - b\|^2 + t_n^2\|a^n + b^{n-1}\|^2. \tag{17}$$

The first line gives

$$\|u^{n-1} - u\|^2 + t_n^2\|b^{n-1} - b\|^2 \ge \|u^n - u\|^2 + t_{n+1}^2\|b^n - b\|^2 + \|u^{n-1} - v^n\|^2 + \big(t_n^2 - t_{n+1}^2\big)\|b^n - b\|^2.$$

Summing this inequality from $n = 1$ to $N$, we get

$$\sum_{n=1}^N \|u^{n-1} - v^n\|^2 \le \|u^0 - u\|^2 + t_1^2\|b^0 - b\|^2 - \big(\|u^N - u\|^2 + t_{N+1}^2\|b^N - b\|^2\big) + \sum_{n=1}^N\big(t_{n+1}^2 - t_n^2\big)\|b^n - b\|^2.$$

Now, since $(b^n)$ is bounded and it holds that

$$\sum_{n=1}^\infty |t_n^2 - t_{n+1}^2| = \sum_{n=1}^\infty |t_n - t_{n+1}|\,|t_n + t_{n+1}| \le 2\bar t\sum_{n=1}^\infty |t_n - t_{n+1}| < \infty$$

by our assumption, we can conclude that

$$\sum_{n=1}^\infty \|u^{n-1} - v^n\|^2 < \infty,$$

i.e., by (15), we have

$$\lim_{n\to\infty}\big(u^{n-1} - v^n\big) = \lim_{n\to\infty} t_n\big(a^n + b^{n-1}\big) = 0,$$

and, since $t_n \ge \underline t > 0$, also $a^n + b^{n-1} \to 0$.

This shows that $(v^n)$ and $(a^n)$ are also bounded. Due to the boundedness of $(u^n)$ and $(b^n)$, there exist weakly convergent subsequences $(u^{n_l})$ and $(b^{n_l})$ such that

$$u^{n_l} \rightharpoonup u^*, \qquad b^{n_l} \rightharpoonup b^*,$$

and by the above limits we also have

$$v^{n_l+1} \rightharpoonup u^*, \qquad a^{n_l+1} \rightharpoonup -b^*.$$

From [1, Corollary 3] it follows that $b^* \in B(u^*)$ and $-b^* \in A(u^*)$, i.e., $(u^*, b^*) \in S(A,B)$. This shows that $\big((u^n, b^n)\big)$ has weak cluster points and that all such points lie in $S(A,B)$. Now we deduce from (17) that, for every $(u, b) \in S(A,B)$,

$$\|u^n - u\|^2 + t_{n+1}^2\|b^n - b\|^2 \le \|u^{n-1} - u\|^2 + t_n^2\|b^{n-1} - b\|^2 + |t_{n+1}^2 - t_n^2|\,\|b^n - b\|^2.$$

Since $(b^n)$ is bounded and $\sum_n |t_n^2 - t_{n+1}^2| < \infty$, this shows that the sequence $\big((u^n, b^n)\big)$ is quasi-Fejér convergent to the extended solution set $S(A,B)$ with respect to the distance $\big(\|u^n - u\|^2 + t_{n+1}^2\|b^n - b\|^2\big)^{1/2}$. Thus, similarly to the proof of [39, Theorem 1], we conclude that the whole sequence weakly converges to an element of $S(A,B)$. ∎

## 4 An adaptive step-size for DR methods

The stepsize suggested by (8) or by (9) is derived from our analysis of a linear case, and it does not guarantee convergence in general. In this section, we suggest modifying this stepsize so that we can prove convergence of the DR scheme. We build our adaptive stepsize on two insights:

• The estimates of the eigenvalues of the DR iteration in the linear case from Section 2.1 motivated the adaptive stepsize

$$t_n = \frac{\|u^n\|}{\|Bu^n\|} \tag{18}$$

for single-valued $B$; for the general case, we consider

$$t_n = \frac{\|J_{t_{n-1}B}y^{n-1}\|}{\|y^{n-1} - J_{t_{n-1}B}y^{n-1}\|}\,t_{n-1} \tag{19}$$

from Section 2.2.

• Theorem 3.1 ensures the convergence of the non-stationary DR-iteration as soon as the stepsize sequence is convergent with summable increments.

However, the sequences (18) and (19) are not guaranteed to converge (and numerical experiments indicate that divergence may indeed occur). Here is a way to adapt the rule (18) to produce a suitable stepsize sequence in the single-valued case:

1. Choose safeguards $0 < t_{\min} < t_{\max}$, a summable "conservation sequence" $(\omega_n) \subset [0, 1]$, and start with some $t_0 \in [t_{\min}, t_{\max}]$.

2. Let $\operatorname{proj}_{[t_{\min}, t_{\max}]}$ be the projection onto the box $[t_{\min}, t_{\max}]$. We construct $t_n$ as

$$t_n = (1 - \omega_n)t_{n-1} + \omega_n\operatorname{proj}_{[t_{\min}, t_{\max}]}\Big(\frac{\|u^n\|}{\|Bu^n\|}\Big). \tag{20}$$
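A minimal sketch of the update (20) in isolation (our own construction): even if the heuristic quotient oscillates wildly, the safeguarded sequence stays in the box and has summable increments, which is exactly what the convergence theory requires:

```python
import numpy as np

t_min, t_max = 1e-3, 1e3

def safeguarded_steps(raw_values, t0=1.0):
    """Apply update (20) to a stream of heuristic values ||u^n|| / ||B u^n||."""
    t, out = t0, []
    for n, s in enumerate(raw_values, start=1):
        omega = 1.0 / n**2                       # summable conservation sequence
        proj = min(max(s, t_min), t_max)         # projection onto [t_min, t_max]
        t = (1.0 - omega) * t + omega * proj     # convex combination, cf. (20)
        out.append(t)
    return out

rng = np.random.default_rng(6)
# Heuristic values oscillating over twelve orders of magnitude:
ts = safeguarded_steps(10.0 ** rng.uniform(-6, 6, size=2000))
total_increment = sum(abs(a - b) for a, b in zip(ts[1:], ts[:-1]))
```

Since $|t_n - t_{n-1}| \le \omega_n (t_{\max} - t_{\min})$, the increments are summable whenever $(\omega_n)$ is, so the sequence converges regardless of the raw heuristic values.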

The following lemma ensures that this construction leads to a convergent stepsize sequence $(t_n)$ with summable increments.

###### Lemma 4.1.

Let $(\alpha_n)$ be a bounded sequence, i.e., $\underline\alpha \le \alpha_n \le \overline\alpha$, and let $(\omega_n) \subset [0, 1]$ be such that $\sum_n \omega_n < \infty$. Then, the sequence $(\beta_n)$ defined by $\beta_0 \in [\underline\alpha, \overline\alpha]$ and

$$\beta_n = (1 - \omega_n)\beta_{n-1} + \omega_n\alpha_n$$

stays in $[\underline\alpha, \overline\alpha]$, converges to some $\beta^\star$, and it holds that $\sum_n |\beta_n - \beta_{n-1}| < \infty$.

###### Proof.

Obviously, $\beta_0 \in [\underline\alpha, \overline\alpha]$, and since $\beta_n$ is a convex combination of $\beta_{n-1}$ and $\alpha_n$, one can easily see that $\beta_n$ obeys the same bounds as $(\alpha_n)$.