# Sublabel-Accurate Relaxation of Nonconvex Energies

We propose a novel spatially continuous framework for convex relaxations based on functional lifting. Our method can be interpreted as a sublabel-accurate solution to multilabel problems. We show that previously proposed functional lifting methods optimize an energy which is linear between two labels and hence require (often infinitely) many labels for a faithful approximation. In contrast, the proposed formulation is based on a piecewise convex approximation and therefore needs far fewer labels. In comparison to recent MRF-based approaches, our method is formulated in a spatially continuous setting and shows less grid bias. Moreover, in a local sense, our formulation is the tightest possible convex relaxation. It is easy to implement and allows an efficient primal-dual optimization on GPUs. We show the effectiveness of our approach on several computer vision problems.


## 1 Introduction

Energy minimization methods have become the central paradigm for solving practical problems in computer vision. The energy functional can often be written as the sum of a data fidelity and a regularization term. One of the most popular regularizers is the total variation (TV) due to its many favorable properties [4]. Hence, an important class of optimization problems is given as

 min_{u : Ω → Γ} ∫_Ω ρ(x, u(x)) dx + λ TV(u), (1)

defined for functions u : Ω → Γ with finite total variation, arbitrary, possibly nonconvex dataterms ρ, label spaces Γ which are closed intervals in ℝ, Ω ⊂ ℝ^d, and λ > 0. The multilabel interpretation of the dataterm is that ρ(x, u(x)) represents the cost of assigning label u(x) to the point x ∈ Ω. For (weakly) differentiable functions u, TV(u) equals the integral over the norm of the gradient, and therefore favors a spatially coherent label configuration. The difficulty of minimizing the nonconvex energy (1) has motivated researchers to develop convex reformulations.
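To make the regularizer in (1) concrete, the following minimal sketch (ours, not from the paper) evaluates a discrete isotropic total variation, assuming unit grid spacing, forward differences, and Neumann boundary conditions:

```python
# Minimal sketch of the discrete isotropic TV appearing in Eq. (1),
# assuming unit grid spacing, forward differences, Neumann boundaries.
import numpy as np

def tv_isotropic(u):
    """Sum over pixels of the Euclidean norm of the forward-difference gradient."""
    dx = np.zeros_like(u)
    dy = np.zeros_like(u)
    dx[:, :-1] = u[:, 1:] - u[:, :-1]   # horizontal forward difference
    dy[:-1, :] = u[1:, :] - u[:-1, :]   # vertical forward difference
    return np.sqrt(dx**2 + dy**2).sum()

u = np.zeros((4, 4))
u[:, 2:] = 1.0                          # a vertical unit step edge
print(tv_isotropic(u))                  # 4.0: length of the jump set
```

A piecewise constant labeling thus pays the length of its discontinuity set, which is why TV favors spatially coherent label configurations.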

Convex representations of (1) and more general related energies have been studied in the context of the calibration method for the Mumford-Shah functional [1]. Based on these works, relaxations for the piecewise constant [15] and piecewise smooth Mumford-Shah functional [16] have been proposed. Inspired by Ishikawa’s graph-theoretic globally optimal solution to discrete variants of (1), continuous analogues have been considered by Pock et al. in [17, 18]. Continuous relaxations for multilabeling problems with finite label spaces have also been studied in [11].

Interestingly, the discretization of the aforementioned continuous relaxations is very similar to the linear programming relaxations proposed for MAP inference in the Markov Random Field (MRF) community [10, 22, 24, 26]. Both approaches ultimately discretize the range into a finite set of labels. A closer analysis of these relaxations reveals, however, that they are not well suited to represent the continuous valued range that we face in most computer vision problems such as stereo matching or optical flow. More specifically, the above relaxations are not designed to assign meaningful cost values to non-integral configurations. As a result, a large number of labels is required to achieve a faithful approximation. Solving real-world vision problems therefore entails large optimization problems with high memory and runtime requirements. To address this problem, Zach and Kohli [27], Zach [25] and Fix and Agarwal [7] introduced MRF-based approaches which retain continuous label spaces after discretization. For manifold-valued labels, this issue was addressed by Lellmann et al. [12], however with the sole focus on the regularizer.

### 1.1 Contributions

We propose the first sublabel–accurate convex relaxation of nonconvex problems in a spatially continuous setting. It exhibits several favorable properties:

• In contrast to existing spatially continuous lifting approaches [17, 18], the proposed method provides substantially better solutions with far fewer labels – see Fig. 3. This provides savings in runtime and memory.

• In Sec. 3 we show that the functional lifting methods [17, 18] are a special case of the proposed framework.

• In Sec. 3 we show that, in a local sense, our formulation is the tightest convex relaxation which takes dataterm and regularizer into account separately. It is unknown whether this “local convex envelope” property also holds for the discrete approach [27].

• Our formulation is compact and requires only half as many variables for the dataterm as the formulation in [27]. We prove that the sublabel–accurate total variation can be represented in a very simple way, introducing no overhead compared to [17, 18]. In contrast, the regularizer in [27] is much more involved.

• Since our method is derived in a spatially continuous setting, the proposed approach easily allows different gradient discretizations. In contrast to [25, 27] the regularizer is isotropic leading to noticeably less grid bias.

## 2 Notation and Mathematical Preliminaries

We make heavy use of the convex conjugate, which is given as f*(v) = sup_u ⟨u, v⟩ − f(u) for functions f : ℝ^k → ℝ ∪ {∞}. The biconjugate f** denotes the convex envelope of f, i.e., the largest lower-semicontinuous convex under-approximation of f. For a set C we denote by δ_C the indicator function which maps any element from C to 0 and is ∞ otherwise. For a comprehensive introduction to convex analysis, we refer the reader to [19]. Vector-valued functions are written in bold symbols. If it is clear from the context, we will drop the argument x inside the functions, e.g., we write ρ(u) for ρ(x, u(x)).

## 3 Functional Lifting

To derive a convex representation of (1), we rely on the framework of functional lifting. The idea is to reformulate the optimization problem in a higher dimensional space, in which the convex envelope approximates the nonconvex energy better than the one of the original low dimensional energy. We start by sampling the range at k + 1 labels γ₁ < γ₂ < … < γ_{k+1}. This partitions the range into k intervals Γ_i = [γ_i, γ_{i+1}] so that Γ = Γ₁ ∪ … ∪ Γ_k. Clearly, any value in the range of u can be written as

 u(x) = γ_α^i := γ_i + α(γ_{i+1} − γ_i), (2)

for α ∈ [0, 1] and some label index 1 ≤ i ≤ k. We represent such a value in the range by a k-dimensional vector

 u(x) = 1_α^i := α 1_i + (1 − α) 1_{i−1}, (3)

where 1_i ∈ ℝ^k denotes a vector starting with i ones followed by k − i zeros. We call 1_α^i the lifted representation of γ_α^i, representing the graph of u. This notation is depicted in Fig. 4. Back-projecting the lifted u(x) to the range of u using the layer cake formula yields a one-to-one correspondence between 1_α^i and γ_α^i via

 u(x) = γ₁ + ∑_{i=1}^k u_i(x) (γ_{i+1} − γ_i). (4)

We now formulate problem (1) in terms of such graph functions, a technique that is common in the theory of Cartesian currents [8].
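The correspondence between (2), (3) and (4) can be sketched numerically. The helper names `lift` and `backproject` below are ours, and the label vector is an arbitrary example:

```python
# Sketch of the lifted representation (3) and the layer-cake
# back-projection (4); `lift`/`backproject` are hypothetical helper
# names, labels gamma_1 < ... < gamma_{k+1} are an arbitrary example.
import numpy as np

gamma = np.array([0.0, 0.25, 0.5, 1.0])   # k + 1 = 4 labels, k = 3
k = len(gamma) - 1

def lift(val):
    """Return 1_alpha^i for the interval Gamma_i containing `val`."""
    i = int(np.clip(np.searchsorted(gamma, val, side='right') - 1, 0, k - 1))
    alpha = (val - gamma[i]) / (gamma[i + 1] - gamma[i])
    u = np.zeros(k)
    u[:i] = 1.0      # i leading ones ...
    u[i] = alpha     # ... followed by the sublabel offset alpha
    return u

def backproject(u):
    """Layer-cake back-projection, Eq. (4)."""
    return gamma[0] + np.sum(u * (gamma[1:] - gamma[:-1]))

v = 0.6
print(lift(v))                  # [1, 1, ~0.2]: alpha = 0.2 in Gamma_3
print(backproject(lift(v)))     # recovers ~0.6
```

The round trip is exact, which is the one-to-one correspondence the text refers to.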

### 3.1 Convexification of the Dataterm

For now, we consider the dataterm at a fixed point x ∈ Ω. Then ρ from (1) is a possibly nonconvex real-valued function (cf. Fig. 7) that we seek to minimize over the compact interval Γ:

 min_{u ∈ Γ} ρ(u). (5)

Due to the one-to-one correspondence between γ_α^i and its lifted representation 1_α^i it is clear that solving problem (5) is equivalent to finding a minimizer of the lifted energy:

 ρ(u) = min_{1≤i≤k} ρ_i(u), (6)
 ρ_i(u) = ρ(γ_α^i) if u = 1_α^i for some α ∈ [0, 1], and ∞ else. (7)

Note that the constraint u = 1_α^i in (7) is essentially the nonconvex special ordered set of type 2 (SOS2) constraint [3]. More precisely, we demand that the differences of neighboring components of u (with the conventions u₀ = 1 and u_{k+1} = 0) are zero, except for two neighboring ones, which add up to one. In the following proposition, we derive the tightest convex relaxation of (6).

###### Proposition 1.

The convex envelope of (6) is given as:

 ρ**(u) = sup_{v ∈ ℝ^k} ⟨u, v⟩ − max_{1≤i≤k} ρ_i*(v), (8)

where the conjugate of the individual ρ_i is

 ρ_i*(v) = c_i(v) + ρ_i*( v_i / (γ_{i+1} − γ_i) ), (9)

with c_i(v) = ⟨1_{i−1}, v⟩ − γ_i v_i / (γ_{i+1} − γ_i) and ρ_i := ρ + δ_{Γ_i}.

###### Proof.

See appendix. ∎

The above proposition reveals that the convex relaxation implicitly convexifies the dataterm on each interval Γ_i: since f* = (f**)* holds for any f, the equality ρ_i* = (ρ_i**)* implies that starting with ρ yields exactly the same convex relaxation as starting with a dataterm that is convexified on each Γ_i.

###### Corollary 1.

If ρ is linear on each Γ_i, then the convex envelopes of (6) and of σ coincide, where the latter is:

 σ(u) = ρ(γ_α^i) if ∃i : u = 1_α^i, α ∈ {0, 1}, and ∞ else. (10)
###### Proof.

Consider the additional constraint α ∈ {0, 1} for each i, which corresponds to restricting u to the label positions in (7). The fact that our relaxation convexifies ρ on each Γ_i, along with the fact that the convex hull of two points is a line, yields the assertion. ∎

For the piecewise linear case, it is possible to find an explicit form of the biconjugate.

###### Proposition 2.

Let us denote by r ∈ ℝ^k the vector with

 r_i = ρ(γ_{i+1}) − ρ(γ_i), 1 ≤ i ≤ k. (11)

Under the assumptions of Prop. 1, one obtains:

 σ**(u) = ρ(γ₁) + ⟨u, r⟩ if u_i ≥ u_{i+1} and u_i ∈ [0, 1] for all i, and ∞ else. (12)
###### Proof.

See appendix. ∎

Up to an offset (which is irrelevant for the optimization), one can see that (12) coincides with the dataterm of [15], the discretizations of [17, 18], and – after a change of variable – with [11]. This not only proves that the latter optimize a convex envelope, but also shows that our method naturally generalizes these works from piecewise linear to arbitrary piecewise convex energies. Figs. (a) and (b) illustrate the difference of ρ** and σ** on the example of a nonconvex stereo matching cost.
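Prop. 2 can be checked numerically: on feasible lifted configurations, the explicit form (12) reproduces the piecewise linear interpolation of the sampled cost. A sketch with arbitrary example values:

```python
# Numerical check (sketch) of Prop. 2: on a lifted point 1_alpha^i,
# the explicit biconjugate (12) equals the piecewise linear
# interpolation of the sampled cost rho. Sample values are arbitrary.
import numpy as np

gamma = np.array([0.0, 1.0, 2.0, 3.0])
rho = np.array([1.0, 0.2, 0.7, 0.1])       # sampled cost rho(gamma_i)
r = rho[1:] - rho[:-1]                     # Eq. (11)

def sigma_ss(u):
    """Eq. (12) on its feasible set (monotone u with entries in [0,1])."""
    return rho[0] + u @ r

# lifted point between gamma_2 and gamma_3 with alpha = 0.5:
u = np.array([1.0, 0.5, 0.0])
interp = 0.5 * rho[1] + 0.5 * rho[2]       # linear interpolation of rho
print(sigma_ss(u), interp)                 # both are ~0.45
```

Between two labels the lifted energy is thus linear in the sublabel offset, which is exactly why the baseline relaxations need many labels for nonconvex costs.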

Because our method allows arbitrary convex functions on each interval Γ_i, we can prove that, for the two-label case, our approach optimizes the convex envelope of the dataterm.

###### Proposition 3.

In the case of binary labeling, i.e., k = 1, the convex envelope of (6) reduces to

 ρ**(u) = ρ**(γ₁ + u(γ₂ − γ₁)), with u ∈ [0, 1]. (13)

###### Proof.

See appendix. ∎

### 3.2 A Lifted Representation of the Total Variation

We now want to find a lifted convex formulation that emulates the total variation regularization in (1). We follow [5] and define an appropriate integrand Φ of the functional

 TV(u)=∫ΩΦ(x,Du), (14)

where the distributional derivative Du is a finite ℝ^{k×d}-valued Radon measure [2]. We define

 Φ(g) = min_{1≤i≤j≤k} Φ_{i,j}(g). (15)

The individual Φ_{i,j} are given by:

 Φ_{i,j}(g) = |γ_α^i − γ_β^j| · |ν|₂ if g = (1_α^i − 1_β^j) ν^T, and ∞ else, (16)

for some α, β ∈ [0, 1] and ν ∈ ℝ^d. The intuition is that Φ_{i,j} penalizes a jump from γ_α^i to γ_β^j in the direction of ν. Since Φ is nonconvex we compute the convex envelope.

###### Proposition 4.

The convex envelope of (15) is

 Φ**(g) = sup_{q ∈ K} ⟨q, g⟩, (17)

where the set K is given as:

 K = { q ∈ ℝ^{k×d} : |q^T(1_α^i − 1_β^j)|₂ ≤ |γ_α^i − γ_β^j|, ∀ 1 ≤ i ≤ j ≤ k, ∀ α, β ∈ [0, 1] }. (18)
###### Proof.

See appendix. ∎

The set K from Eq. (18) involves infinitely many constraints, which makes numerical optimization difficult. As the following proposition reveals, the infinite number of constraints can be reduced to only linearly many, allowing the constraint to be enforced exactly.

###### Proposition 5.

In case the labels are ordered, i.e., γ₁ < γ₂ < … < γ_{k+1}, the constraint set K from Eq. (18) is equal to

 K = { q ∈ ℝ^{k×d} : |q_i|₂ ≤ γ_{i+1} − γ_i, ∀ 1 ≤ i ≤ k }. (19)
###### Proof.

See appendix. ∎

This shows that the proposed regularizer coincides with the total variation from [5], where it has been derived based on (16) restricted to the label positions, i.e., α, β ∈ {0, 1}. Prop. 5 together with Prop. 3 show that for k = 1 our formulation amounts to unlifted optimization with a convexified dataterm.
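Prop. 5 makes the projection onto K trivial: each row q_i is independently projected onto a Euclidean ball of radius γ_{i+1} − γ_i. A minimal sketch (the helper name `project_K` is ours):

```python
# Sketch of the orthogonal projection onto the set K of Prop. 5:
# row q_i of q in R^{k x d} is projected onto the l2-ball of radius
# gamma_{i+1} - gamma_i. `project_K` is a hypothetical helper name.
import numpy as np

def project_K(q, gamma):
    radii = gamma[1:] - gamma[:-1]                    # one radius per row
    norms = np.linalg.norm(q, axis=1)
    scale = np.minimum(1.0, radii / np.maximum(norms, 1e-12))
    return q * scale[:, None]

gamma = np.array([0.0, 0.5, 1.5])
q = np.array([[3.0, 4.0],       # norm 5   -> shrunk to radius 0.5
              [0.3, 0.4]])      # norm 0.5 -> feasible (radius 1.0), unchanged
print(project_K(q, gamma))
```

This is the "simple ℓ₂-ball projection" used inside the primal-dual iterations of Sec. 4.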

## 4 Numerical Optimization

Discretizing the domain Ω as a d-dimensional Cartesian grid, the relaxed energy minimization problem becomes

 min_{u : Ω → ℝ^k} ∑_{x ∈ Ω} ρ**(x, u(x)) + Φ**(x, ∇u(x)), (20)

where ∇ denotes a forward-difference operator with ∇u : Ω → ℝ^{k×d}. We rewrite the dataterm given in equation (8) by replacing the pointwise maximum over the conjugates with a maximum over a real number q(x) and obtain the following saddle point formulation of problem (20):

 min_{u : Ω → ℝ^k} max_{(v,q) ∈ C, p : Ω → K} ⟨u, v⟩ − ∑_{x ∈ Ω} q(x) + ⟨p, ∇u⟩, (21)
 C = { (v, q) : Ω → ℝ^k × ℝ | q(x) ≥ ρ_i*(v(x)), ∀x, ∀i }. (22)

We numerically compute a minimizer of problem (21) using a first-order primal-dual method [6, 16] with diagonal preconditioning [14] and adaptive steps [9]. It basically alternates between a gradient descent step in the primal variable and a gradient ascent step in the dual variables. Subsequently the dual variables are orthogonally projected onto the sets C respectively K. The projection onto the set K is a simple ℓ₂-ball projection. To simplify the projection onto C, we transform the epigraph constraints in (22) into scaled epigraph constraints by introducing additional variables z_i(x) with:

 z_i(x) = [ q(x) − c_i(v(x)) ] (γ_{i+1} − γ_i). (23)

Using equation (9) we can now rewrite the constraints in (22) as

 z_i(x) / (γ_{i+1} − γ_i) ≥ ρ_i*( v_i(x) / (γ_{i+1} − γ_i) ). (24)

We implement the newly introduced equality constraints (23) by introducing Lagrange multipliers. It remains to discuss the orthogonal projections onto the epigraphs of the conjugates ρ_i*. Currently we support quadratic and piecewise linear convex pieces ρ_i. For the piecewise linear case, the conjugate ρ_i* is a piecewise linear function as well: its slopes correspond to the γ-positions of the sublabels and its intercepts to the negated function values at the sublabel positions.
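The piecewise linear case can be sketched directly from the definition of the conjugate: ρ_i*(v) = max_j (v·g_j − ρ(g_j)) over the sublabel positions g_j, so the slopes are the positions and the intercepts the negated values. The sample points below are arbitrary:

```python
# Sketch of the conjugate of a convex piecewise linear piece, given by
# sublabel positions g_j and values rho(g_j): by definition
# rho_i^*(v) = max_j (v * g_j - rho(g_j)), i.e. slopes = sublabel
# positions, intercepts = negated function values.
def conjugate_pl(sublabels, values, v):
    return max(v * g - r for g, r in zip(sublabels, values))

sublabels = [0.0, 0.5, 1.0]
values = [1.0, 0.2, 0.8]       # convex samples on one interval
print(conjugate_pl(sublabels, values, v=2.0))   # max(-1, 0.8, 1.2) = 1.2
```

Evaluating (and projecting onto) the epigraph of ρ_i* therefore only involves finitely many affine functions per piece.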

The conjugates as well as the epigraph projections of both a quadratic and a piecewise linear piece are depicted in Fig. 10. For the quadratic case, the projection onto the epigraph of a parabola is computed using [23, Appendix B.2].
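The paper's solver additionally uses diagonal preconditioning and adaptive steps; the following sketch illustrates only the basic primal-dual alternation (dual ascent with projection, primal proximal descent, over-relaxation) on the much simpler unlifted 1D ROF model, with fixed step sizes satisfying τσ‖D‖² ≤ 1. All names and data here are ours:

```python
# Illustration (sketch) of the first-order primal-dual scheme of Sec. 4
# on the unlifted 1D ROF model
#     min_u 0.5 * ||u - f||^2 + lam * ||D u||_1,
# alternating dual ascent + projection, primal prox step, over-relaxation.
import numpy as np

def rof_pdhg(f, lam, tau=0.25, sigma=0.25, iters=500):
    n = len(f)
    D = np.diff(np.eye(n), axis=0)           # forward-difference operator
    u, u_bar, p = f.copy(), f.copy(), np.zeros(n - 1)
    for _ in range(iters):
        # dual ascent, then projection onto the feasible interval [-lam, lam]
        p = np.clip(p + sigma * (D @ u_bar), -lam, lam)
        # primal descent step combined with the prox of the data term
        u_new = (u - tau * (D.T @ p) + tau * f) / (1 + tau)
        u_bar = 2 * u_new - u                # over-relaxation
        u = u_new
    return u

f = np.array([0.0, 0.1, -0.1, 5.0, 5.1, 4.9])
u = rof_pdhg(f, lam=0.2)
print(np.round(u, 2))   # two nearly constant plateaus, jump preserved
```

In the lifted problem (21) the same alternation applies, with the clipping replaced by the projections onto C and K discussed above.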

## 5 Experiments

We implemented the primal-dual algorithm in CUDA to run on GPUs. The number of optimization variables of both our method and our implementation of the functional lifting framework [17], which serves as a baseline, grows with the number of labels times the number of points used to discretize the domain Ω. As we will show, our method requires much fewer labels to yield comparable results, thus leading to an improvement in accuracy, memory usage, and speed.

### 5.1 Rudin-Osher-Fatemi Model

As a proof of concept, we first evaluate the novel relaxation on the well-known Rudin-Osher-Fatemi (ROF) model [20]. It corresponds to (1) with the following dataterm:

 ρ(x, u(x)) = (u(x) − f(x))², (25)

where f : Ω → ℝ denotes the input data. While there is no practical use in applying convex relaxation methods to an already convex problem such as the ROF model, the purpose of this experiment is two-fold. Firstly, it allows us to measure the overhead introduced by our method by comparing it to standard convex optimization methods which do not rely on functional lifting. Secondly, we can experimentally verify that the relaxation is tight for a convex dataterm.

In Fig. 17 we solve (25) directly using the primal-dual algorithm [9], using the baseline functional lifting method [17], and using our proposed algorithm. First, the globally optimal energy was computed using the direct method with a very high number of iterations. Then we measured how long each method took to reach this global optimum up to a fixed tolerance.

The baseline method fails to reach the global optimum even for a large number of labels. While the lifting framework introduces a certain overhead, the proposed method finds the same globally optimal energy as the direct unlifted optimization approach and generalizes to nonconvex energies.

### 5.2 Robust Truncated Quadratic Dataterm

The quadratic dataterm (25) is often not well suited for real-world data as it stems from a pure Gaussian noise assumption and does not model outliers. We now consider a robust truncated quadratic dataterm:

 ρ(x, u(x)) = (α/2) min{ (u(x) − f(x))², ν }. (26)

To implement (26), we use a piecewise polynomial approximation of the dataterm. In Fig. 26 we degraded the input image with additive Gaussian and salt-and-pepper noise. The parameters in (26) were chosen empirically. The proposed method requires significantly fewer labels to find lower energies than the baseline method.

### 5.3 Comparison to the Method of Zach and Kohli

We remark that Prop. 4 and Prop. 5 hold for arbitrary convex one-homogeneous functions in place of the ℓ₂-norm |ν|₂ in equation (16). In particular, they hold for the anisotropic ℓ₁-norm. This generalization allows us to directly compare our convex relaxation to the MRF approach of Zach and Kohli [27]. In Fig. 39 we show the results of optimizing the two models entitled “DC-Linear” and “DC-MRF” proposed in [27], and of our proposed method with anisotropic regularization on the robust truncated denoising energy (26). The parameters and the label space were chosen as described in [27].

Overall, all the energies are better than the ones reported in [27]. It can be seen from Fig. 39 that the proposed relaxation is competitive with the one proposed in [27]. In addition, the proposed relaxation uses a more compact representation and extends to isotropic regularizers. To illustrate the advantages of isotropic regularization, Figs. (a) and (b) show a comparison of our proposed method with isotropic and anisotropic regularization in the next section.

### 5.4 Stereo Matching

Given a pair of rectified images, the task of finding a correspondence between the two images can be formulated as an optimization problem over a scalar field u : Ω → Γ, where u(x) denotes the displacement along the epipolar line associated with each x ∈ Ω. The overall cost functional fits Eq. (1). In our experiments, we computed the matching cost over the disparity range on the Middlebury stereo benchmark [21] using a truncated sum of absolute gradient differences over a patch. For the dataterm of the proposed relaxation, we convexify the matching cost in each range Γ_i by numerically computing the convex envelope using the gift wrapping algorithm.
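The text does not spell out the convexification step; the sketch below computes the lower convex envelope of a sampled matching cost with a monotone-chain lower-hull sweep, an equivalent alternative to gift wrapping (our choice, not necessarily the paper's implementation):

```python
# Sketch of convexifying a sampled matching cost on one interval:
# compute the lower convex hull of the points (gamma, rho(gamma)) with
# a monotone-chain sweep over samples sorted by gamma.
def lower_convex_hull(xs, ys):
    """Return indices of the lower convex envelope of sorted samples."""
    hull = []
    for i in range(len(xs)):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # pop b if it lies on or above the chord a -> i (non-convex kink)
            if (ys[b] - ys[a]) * (xs[i] - xs[a]) >= (ys[i] - ys[a]) * (xs[b] - xs[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0, 1.0, 2.5, 0.5, 2.0]    # nonconvex sampled cost
print(lower_convex_hull(xs, ys))  # [0, 1, 3, 4]: sample 2 lies above the hull
```

The retained vertices define the piecewise linear (or, after fitting, piecewise convex) pieces ρ_i fed into the lifted dataterm.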

The first row in Fig. 55 shows the result of the proposed relaxation using the convexified energy between two labels. The second row shows the baseline approach using the same amount of labels. Even with only two labels, the proposed method produces a reasonable depth map while the baseline approach basically corresponds to a two-region segmentation.

### 5.5 Phase Unwrapping

Many sensors such as time-of-flight cameras or interferometric synthetic aperture radar (SAR) yield cyclic data lying on the circle S¹. Here we consider the task of total variation regularized phase unwrapping. As shown on the left in Fig. 76, the dataterm is a nonconvex function where each minimum corresponds to a phase shift by 2π:

 ρ(x, u(x)) = d_{S¹}(u(x), f(x))². (27)

For the experiments, we approximated the nonconvex energy by quadratic pieces as depicted in Fig. 76. Again, it is visible in Fig. 76 that the baseline method shows label space discretization artifacts and fails to unwrap the depth map correctly if the number of labels is chosen too low. The proposed method yields a smooth unwrapped result using only a few labels.
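The cyclic dataterm (27) can be sketched directly; `d_s1` below is a hypothetical helper computing the geodesic distance on S¹ with angles identified modulo 2π:

```python
# Sketch of the cyclic dataterm (27): squared geodesic distance on the
# circle S^1 (angles identified modulo 2*pi); each minimum over the
# label space corresponds to a phase shift by 2*pi.
import math

def d_s1(a, b):
    """Geodesic distance between two angles on S^1."""
    diff = abs(a - b) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

def rho(u, f):
    return d_s1(u, f) ** 2

print(d_s1(0.1, 2 * math.pi - 0.1))   # ~0.2: the distance wraps around
```

Each 2π-shifted copy of the input value is a local minimum of ρ, which is why a piecewise quadratic approximation fits this dataterm naturally.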

### 5.6 Depth From Focus

In depth from focus the task is to recover the depth of a scene, given a stack of images taken from a constant position but with different focal settings, so that in each image only the objects of a certain depth are sharp. We achieve this by estimating the depth of a point by locally maximizing its contrast over the set of images. We compute the cost by using the modified Laplacian function [13] as a contrast measure. Similar to the stereo experiments, we convexify the cost on each label range Γ_i by computing the convex hull. The results are shown in Fig. 68. While the baseline method clearly shows the label space discretization, the proposed approach yields a smooth depth map. Since the proposed method uses a convex lower bound of the lifted energy, the regularizer has slightly more influence on the final result. This explains why the resulting depth maps in Fig. 68 and Fig. 55 look overall less noisy.

## 6 Conclusion

In this work we proposed a tight convex relaxation that can be interpreted as a sublabel–accurate formulation of classical multilabel problems. We showed that the local convex envelope involves infinitely many constraints, however we proved that it suffices to consider linearly many of those. The final formulation is a simple saddle-point problem that admits fast primal-dual optimization. Our method maintains sublabel accuracy even after discretization and for that reason outperforms existing spatially continuous methods. Interesting directions for future work include higher dimensional label spaces, manifold valued data and more general regularizers.

## Appendix A Appendix

###### Proof of Proposition 1.

The proof follows from a direct calculation. We start with the definition of the biconjugate:

 ρ**(u) = sup_{v ∈ ℝ^k} ⟨u, v⟩ − (min_{1≤i≤k} ρ_i)*(v) = sup_{v ∈ ℝ^k} ⟨u, v⟩ − max_{1≤i≤k} ρ_i*(v). (28)

This shows the first equation in the proposition. For the individual ρ_i* we again start with the definition of the convex conjugate:

 ρ_i*(v) = sup_{α ∈ [0,1]} ⟨α 1_i + (1 − α) 1_{i−1}, v⟩ − ρ(α γ_{i+1} + (1 − α) γ_i) = sup_{α ∈ [0,1]} ⟨1_{i−1}, v⟩ + α v_i − ρ(γ_α^i). (29)

Applying the substitution γ_α^i = γ_i + α(γ_{i+1} − γ_i), i.e., α = (γ_α^i − γ_i)/(γ_{i+1} − γ_i), yields:

 ρ_i*(v) = sup_{γ_α^i ∈ Γ_i} ⟨1_{i−1}, v⟩ + (γ_α^i − γ_i)/(γ_{i+1} − γ_i) · v_i − ρ(γ_α^i) (30)
 = ⟨1_{i−1}, v⟩ − γ_i v_i/(γ_{i+1} − γ_i) + sup_{γ_α^i ∈ Γ_i} γ_α^i v_i/(γ_{i+1} − γ_i) − ρ(γ_α^i)
 = c_i(v) + ρ_i*( v_i/(γ_{i+1} − γ_i) ). ∎

###### Proof of Proposition 2.

It is easy to see that

 σ*(v) = max_{i ∈ {1,…,L}} ( ∑_{l=1}^{i−1} v_l − ρ(γ_i) ).

To compute the biconjugate, we write any input argument as u = ∑_{i=1}^k μ_i 1_i, denote the number of labels by L = k + 1, and use ⟨1_i, v⟩ = ∑_{l=1}^i v_l to obtain

 σ**(u) = sup_v ⟨u, v⟩ − max_{i ∈ {1,…,L}} ( ∑_{l=1}^{i−1} v_l − ρ(γ_i) )
 = sup_v ∑_{i=1}^k μ_i ∑_{l=1}^i v_l − max_{i ∈ {1,…,L}} ( ∑_{l=1}^{i−1} v_l − ρ(γ_i) ).

Instead of taking the supremum over all v ∈ ℝ^k, we might as well take the supremum over all vectors p with p_i = ∑_{l=1}^i v_l. Care has to be taken of the first summand in the second term of the above formulation. We obtain

 sup_v ∑_{i=1}^k μ_i ∑_{l=1}^i v_l − max_{i ∈ {1,…,L}} ( ∑_{l=1}^{i−1} v_l − ρ(γ_i) )
 = sup_p ∑_{i=1}^k μ_i p_i − max( max_{i ∈ {2,…,L}} ( p_{i−1} − ρ(γ_i) ), −ρ(γ₁) )
 = sup_p ∑_{i=1}^k μ_i p_i − max( max_{i ∈ {1,…,k}} ( p_i − ρ(γ_{i+1}) ), −ρ(γ₁) )
 = ∑_{i=1}^k μ_i ρ(γ_{i+1}) + sup_p ∑_{i=1}^k μ_i p_i − max( max_{i ∈ {1,…,k}} p_i, −ρ(γ₁) ),

Note that if any μ_i is negative, the supremum immediately yields infinity by taking p_i → −∞. Similarly, ∑_{i=1}^k μ_i > 1 yields infinity by taking all p_i → ∞. For μ_i ≥ 0 for all i and ∑_i μ_i ≤ 1, we know that ∑_i μ_i p_i ≤ (∑_i μ_i) max_i p_i. Since equality can be obtained by choosing p_i = z for all i, we can reduce the above supremum to

 sup_z ( z ∑_{i=1}^k μ_i − max(z, −ρ(γ₁)) ) = ( 1 − ∑_{i=1}^k μ_i ) ρ(γ₁),

where we used that the supremum over z is attained at z = −ρ(γ₁) (still assuming that ∑_i μ_i ≤ 1). Let us now undo our change of variable. It is easy to see that μ_k = u_k, and μ_i = u_i − u_{i+1} for i < k. The latter leads to

 ∑_{i=1}^k μ_i ρ(γ_{i+1}) + ( 1 − ∑_{i=1}^k μ_i ) ρ(γ₁)
 = ρ(γ_{k+1}) u_k + ∑_{i=1}^{k−1} (u_i − u_{i+1}) ρ(γ_{i+1}) + (1 − u₁) ρ(γ₁)
 = ρ(γ₁) + ⟨u, r⟩,

for r as defined in (11). Considering the aforementioned constraints μ_i ≥ 0 and ∑_i μ_i ≤ 1, we finally find

 σ**(u) = ρ(γ₁) + ⟨u, r⟩ if 1 ≥ u₁ ≥ … ≥ u_k ≥ 0, and ∞ else. ∎

###### Proof of Proposition 3.

For the special case k = 1 the biconjugate from (28) is just:

 ρ**(u) = sup_{v ∈ ℝ} u v − ρ₁*(v) = ρ₁**(u). (31)

Now using the first line in (30), ρ₁** becomes:

 ρ₁**(u) = sup_{v ∈ ℝ} u v − sup_{γ ∈ Γ} ( (γ − γ₁)/(γ₂ − γ₁) · v − ρ(γ) ) (32)
 = sup_{v ∈ ℝ} v ( u + γ₁/(γ₂ − γ₁) ) − sup_{γ ∈ Γ} ( γ v/(γ₂ − γ₁) − ρ(γ) )
 = sup_{v ∈ ℝ} v ( u + γ₁/(γ₂ − γ₁) ) − ρ*( v/(γ₂ − γ₁) )
 = sup_{ṽ ∈ ℝ} ṽ ( γ₁ + u(γ₂ − γ₁) ) − ρ*(ṽ)
 = ρ**( γ₁ + u(γ₂ − γ₁) ),

where we used Γ₁ = Γ as well as the substitution ṽ = v/(γ₂ − γ₁). ∎

###### Proof of Proposition 4.

We compute the individual conjugates Φ_{i,j}* as:

 Φ_{i,j}*(q) = sup_{g ∈ ℝ^{k×d}} ⟨g, q⟩ − Φ_{i,j}(g) (33)
 = sup_{α,β ∈ [0,1]} sup_{ν ∈ ℝ^d} ⟨q, (1_α^i − 1_β^j) ν^T⟩ − |γ_α^i − γ_β^j| |ν|₂
 = sup_{α,β ∈ [0,1]} sup_{ν ∈ ℝ^d} ⟨q^T(1_α^i − 1_β^j), ν⟩ − |γ_α^i − γ_β^j| |ν|₂.

The inner supremum over ν is the conjugate of the ℓ₂-norm scaled by |γ_α^i − γ_β^j|, evaluated at q^T(1_α^i − 1_β^j). This yields:

 Φ_{i,j}*(q) = 0 if |q^T(1_α^i − 1_β^j)|₂ ≤ |γ_α^i − γ_β^j| for all α, β ∈ [0, 1], and ∞ else. (34)

For the overall biconjugate we have:

 Φ**(g) = sup_{q ∈ ℝ^{k×d}} ⟨q, g⟩ − max_{1≤i≤j≤k} Φ_{i,j}*(q) = sup_{q ∈ K} ⟨q, g⟩. (35)

Since we take the maximum over all conjugates, the set K is given as the intersection of the sets described by the individual indicator functions Φ_{i,j}*:

 K = { q ∈ ℝ^{k×d} : |q^T(1_α^i − 1_β^j)|₂ ≤ |γ_α^i − γ_β^j|, ∀ 1 ≤ i ≤ j ≤ k, ∀ α, β ∈ [0, 1] }. ∎ (36)

###### Proof of Proposition 5.

First we rewrite (36) by expanding the matrix-vector product into sums:

 | ∑_{l=j}^{i−1} q_l + α q_i − β q_j |₂ ≤ |γ_α^i − γ_β^j|, ∀ 1 ≤ j ≤ i ≤ k, ∀ α, β ∈ [0, 1]. (37)

Since the remaining cases of i and j in (36) are equivalent to (37), it is enough to consider (37) instead of (36).

Let q_l ∈ ℝ^d denote the rows of q. We show that (37) is equivalent to each of the following two conditions:

 | ∑_{l=j}^{i} q_l |₂ ≤ γ_{i+1} − γ_j, ∀ 1 ≤ j ≤ i ≤ k. (38)

 |q_i|₂ ≤ γ_{i+1} − γ_i, ∀ 1 ≤ i ≤ k. (39)

The direction “(37) ⇒ (38)” follows by setting α = 1 and β = 0 in (37), and “(38) ⇒ (39)” follows by setting j = i in (38).

The direction “(39) ⇒ (38)” can be proven by a quick calculation using the triangle inequality:

 | ∑_{l=j}^{i} q_l |₂ ≤ ∑_{l=j}^{i} |q_l|₂ ≤ ∑_{l=j}^{i} (γ_{l+1} − γ_l) = γ_{i+1} − γ_j. (40)

It remains to show “(38) ⇒ (37)”. We start with the case i = j:

 |α q_i − β q_i|₂ = |α − β| |q_i|₂ ≤ |α − β| (γ_{i+1} − γ_i) = |γ_α^i − γ_β^i|. (41)

Now let j < i. Since the labels are ordered, it also holds that γ_α^i ≥ γ_β^j, so it is equivalent to show (37) without the absolute value on the right-hand side.

First we show “(38) ⇒ (37)” for α ∈ [0, 1] and β ∈ {0, 1}:

 | ∑_{l=j+1}^{i−1} q_l + α q_i + (1 − β) q_j |₂ ≤ | ∑_{l=j+1}^{i−1} q_l + (1 − β) q_j |₂ + α |q_i|₂ ≤ γ_i − γ_β^j + α (γ_{i+1} − γ_i) = γ_α^i − γ_β^j, (42)