Sublabel-Accurate Relaxation of Nonconvex Energies

12/04/2015 ∙ by Thomas Möllenhoff, et al.

We propose a novel spatially continuous framework for convex relaxations based on functional lifting. Our method can be interpreted as a sublabel-accurate solution to multilabel problems. We show that previously proposed functional lifting methods optimize an energy which is linear between two labels and hence require (often infinitely) many labels for a faithful approximation. In contrast, the proposed formulation is based on a piecewise convex approximation and therefore needs far fewer labels. In comparison to recent MRF-based approaches, our method is formulated in a spatially continuous setting and shows less grid bias. Moreover, in a local sense, our formulation is the tightest possible convex relaxation. It is easy to implement and allows an efficient primal-dual optimization on GPUs. We show the effectiveness of our approach on several computer vision problems.

1 Introduction

Energy minimization methods have become the central paradigm for solving practical problems in computer vision. The energy functional can often be written as the sum of a data fidelity and a regularization term. One of the most popular regularizers is the total variation (TV) due to its many favorable properties [4]. Hence, an important class of optimization problems is given as

min_{u : Ω → Γ} ∫_Ω ρ(x, u(x)) dx + λ TV(u),    (1)

defined for functions u : Ω → Γ with finite total variation, arbitrary, possibly nonconvex dataterms ρ : Ω × Γ → ℝ, label spaces Γ = [γ_1, γ_L] which are closed intervals in ℝ, domains Ω ⊂ ℝ^d, and λ > 0. The multilabel interpretation of the dataterm is that ρ(x, γ) represents the cost of assigning label γ to point x ∈ Ω. For (weakly) differentiable functions, TV(u) equals the integral over the norm of the derivative, and therefore favors a spatially coherent label configuration. The difficulty of minimizing the nonconvex energy (1) has motivated researchers to develop convex reformulations.

Figure 3: We propose a convex relaxation for the variational model (1), which, as opposed to existing functional lifting methods [17, 18], allows continuous label spaces even after discretization. Our method (here applied to stereo matching) avoids label space discretization artifacts, while saving on memory and runtime.

Convex representations of (1) and more general related energies have been studied in the context of the calibration method for the Mumford-Shah functional [1]. Based on these works, relaxations for the piecewise constant [15] and piecewise smooth Mumford-Shah functional [16] have been proposed. Inspired by Ishikawa’s graph-theoretic globally optimal solution to discrete variants of (1), continuous analogues have been considered by Pock et al. in [17, 18]. Continuous relaxations for multilabeling problems with finite label spaces have also been studied in [11].

Interestingly, the discretization of the aforementioned continuous relaxations is very similar to the linear programming relaxations proposed for MAP inference in the Markov Random Field (MRF) community [10, 22, 24, 26]. Both approaches ultimately discretize the range into a finite set of labels. A closer analysis of these relaxations reveals, however, that they are not well-suited to represent the continuous valued range that we face in most computer vision problems such as stereo matching or optical flow. More specifically, the above relaxations are not designed to assign meaningful cost values to non-integral configurations. As a result, a large number of labels is required to achieve a faithful approximation. Solving real-world vision problems therefore entails large optimization problems with high memory and runtime requirements. To address this problem, Zach and Kohli [27], Zach [25] and Fix and Agarwal [7] introduced MRF-based approaches which retain continuous label spaces after discretization. For manifold-valued labels, this issue was addressed by Lellmann et al. [12], however with the sole focus on the regularizer.

1.1 Contributions

We propose the first sublabel-accurate convex relaxation of nonconvex problems in a spatially continuous setting. It exhibits several favorable properties:

  • In contrast to existing spatially continuous lifting approaches [17, 18], the proposed method provides substantially better solutions with far fewer labels – see Fig. 3. This provides savings in runtime and memory.

  • In Sec. 3 we show that the functional lifting methods [17, 18] are a special case of the proposed framework.

  • In Sec. 3 we show that, in a local sense, our formulation is the tightest convex relaxation which takes dataterm and regularizer into account separately. It is unknown whether this “local convex envelope” property also holds for the discrete approach [27].

  • Our formulation is compact and requires only half as many variables for the dataterm as the formulation in [27]. We prove that the sublabel-accurate total variation can be represented in a very simple way, introducing no overhead compared to [17, 18]. In contrast, the regularizer in [27] is much more involved.

  • Since our method is derived in a spatially continuous setting, the proposed approach easily allows different gradient discretizations. In contrast to [25, 27], the regularizer is isotropic, leading to noticeably less grid bias.

2 Notation and Mathematical Preliminaries

We make heavy use of the convex conjugate, which is given as f*(v) = sup_u ⟨u, v⟩ − f(u) for functions f : ℝ^n → ℝ ∪ {∞}. The biconjugate f** denotes the convex envelope of f, i.e., the largest lower-semicontinuous convex under-approximation of f. For a set C we denote by δ_C the indicator function which maps any element from C to 0 and is ∞ otherwise. For a comprehensive introduction to convex analysis, we refer the reader to [19]. Vector-valued functions are written in bold symbols. If it is clear from the context, we will drop the x inside the functions, e.g., we write ρ(u) for ρ(x, u(x)).

3 Functional Lifting

Figure 4: Lifted representation. Instead of optimizing over the function u : Ω → Γ, we optimize over all possible graph functions. The central idea behind our approach is the finite-dimensional representation u(x) ∈ ℝ^{L−1} of u at every point x ∈ Ω.

Figure 7: We show the nonconvex energy ρ(x, ·) at a fixed point x (red dashed line in both plots) from the stereo matching experiment in Fig. 55 over the full range of disparities. The black dots indicate the positions of the labels, and the black curves show the approximations used by the respective methods. Fig. 7(a): The baseline lifting method [17] uses a piecewise linear approximation with the labels as nodes. Fig. 7(b): The proposed method uses an optimal piecewise convex approximation. As we can see, the piecewise convex approximation is closer to the original nonconvex energy and therefore more accurate.

To derive a convex representation of (1), we rely on the framework of functional lifting. The idea is to reformulate the optimization problem in a higher dimensional space, in which the convex envelope approximates the original nonconvex energy better than a convex envelope formed in the low-dimensional space. We start by sampling the range Γ = [γ_1, γ_L] at L labels γ_1 < γ_2 < … < γ_L. This partitions the range into L−1 intervals Γ_i = [γ_i, γ_{i+1}] so that Γ = Γ_1 ∪ … ∪ Γ_{L−1}. Clearly, any value in the range of u can be written as

γ_i^α := γ_i + α (γ_{i+1} − γ_i),    (2)

for α ∈ [0, 1] and some label index 1 ≤ i ≤ L−1. We represent such a value in the range Γ by an (L−1)-dimensional vector

1_i^α := α 1_i + (1 − α) 1_{i−1},    (3)

where 1_k denotes a vector starting with k ones followed by L−1−k zeros. We call 1_i^α the lifted representation of γ_i^α; the lifted function u : Ω → ℝ^{L−1} represents the graph of u. This notation is depicted in Fig. 4. Back-projecting the lifted u to the range of u using the layer cake formula yields a one-to-one correspondence between u(x) = 1_i^α and u(x) = γ_i^α via

u(x) = γ_1 + Σ_{i=1}^{L−1} u_i(x) (γ_{i+1} − γ_i).    (4)

We now formulate problem (1) in terms of such graph functions, a technique that is common in the theory of Cartesian currents [8].
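To make the lifting concrete, here is a minimal NumPy sketch of Eqs. (2)–(4) (our own illustration, not the authors' code; the function names are ours):

import numpy as np

def lift(value, gamma):
    # Lift a scalar in [gamma[0], gamma[-1]] to its sublabel representation
    # (Eq. (3)): ones on the intervals below, then alpha, then zeros.
    L = len(gamma)
    i = int(np.clip(np.searchsorted(gamma, value, side="right") - 1, 0, L - 2))
    alpha = (value - gamma[i]) / (gamma[i + 1] - gamma[i])
    lifted = np.zeros(L - 1)
    lifted[:i] = 1.0
    lifted[i] = alpha
    return lifted

def backproject(lifted, gamma):
    # Layer-cake back-projection, Eq. (4).
    return gamma[0] + np.sum(lifted * np.diff(gamma))

gamma = np.array([0.0, 0.25, 0.5, 1.0])   # L = 4 labels, possibly non-uniform
assert np.isclose(backproject(lift(0.6, gamma), gamma), 0.6)

The round trip is exact for any value in the range, which is precisely the one-to-one correspondence referred to above.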

3.1 Convexification of the Dataterm

For now, we consider a fixed x ∈ Ω. Then the dataterm ρ(x, ·) from (1) is a possibly nonconvex real-valued function (cf. Fig. 7) that we seek to minimize over the compact interval Γ:

min_{u ∈ Γ} ρ(x, u).    (5)

Due to the one-to-one correspondence between u and its lifted representation u, it is clear that solving problem (5) is equivalent to finding a minimizer of the lifted energy:

min_{u ∈ ℝ^{L−1}} ρ(u),    (6)

ρ(u) := ρ(x, γ_i^α) if u = 1_i^α for some 1 ≤ i ≤ L−1 and α ∈ [0, 1], and ∞ otherwise.    (7)

Note that the constraint in (7) is essentially the nonconvex special ordered set of type 2 (SOS2) constraint [3]. More precisely, we demand that the finite differences of the extended vector (1, u_1, …, u_{L−1}, 0) vanish everywhere except for two neighboring entries, which add up to one. In the following proposition, we derive the tightest convex relaxation of (6).

Proposition 1.

The convex envelope of (6) is given as:

ρ**(u) = sup_{v ∈ ℝ^{L−1}} ⟨u, v⟩ − max_{1 ≤ i ≤ L−1} ρ_i*(v),    (8)

where the conjugate of the individual ρ_i is

ρ_i*(v) = ⟨1_{i−1}, v⟩ + sup_{α ∈ [0,1]} α·v_i − ρ(x, γ_i^α),    (9)

with ρ_i denoting the restriction of the lifted dataterm (7) to lifted values in Γ_i, and γ_i^α, 1_{i−1} as in (2) and (3).

Proof.

See appendix. ∎

The above proposition reveals that the convex relaxation implicitly convexifies the dataterm on each interval Γ_i. The equality ρ_i* = (ρ_i**)* implies that starting with the interval-wise convexified dataterm ρ_i** yields exactly the same convex relaxation as starting with ρ_i.
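The one-dimensional suprema appearing in the conjugates (9) can also be evaluated numerically when no closed form is available; a small self-contained sketch (our own helper, with a stand-in sampled cost):

import numpy as np

def conjugate_1d(alphas, rho_vals, v):
    # Numerically evaluate sup_{alpha in [0,1]} alpha * v - rho(alpha)
    # from samples (alphas, rho_vals) of the cost on one interval.
    return np.max(alphas * v - rho_vals)

alphas = np.linspace(0.0, 1.0, 101)
rho_vals = np.sin(6 * alphas) + alphas ** 2   # stand-in nonconvex piece
print(conjugate_1d(alphas, rho_vals, v=2.0))

Taking the maximum over affine functions this way only ever sees the convex envelope of the sampled piece, matching the statement above.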

Corollary 1.

If ρ is linear on each Γ_i, then the convex envelopes of (6) and of the energy restricted to the labels coincide, where the latter is:

ρ̃(u) := ρ(x, γ_i^α) if u = 1_i^α for some 1 ≤ i ≤ L−1 and α ∈ {0, 1}, and ∞ otherwise.    (10)
Proof.

Consider an additional constraint α ∈ {0, 1} for each i, which corresponds to selecting only the label nodes γ_i in (7). The fact that our relaxation is independent of whether we choose ρ or its convex envelope on each Γ_i, along with the fact that the convex hull of two points is a line, yields the assertion. ∎

For the piecewise linear case, it is possible to find an explicit form of the biconjugate.

Proposition 2.

Let us denote by r ∈ ℝ^{L−1} the vector with

r_i = ρ(x, γ_{i+1}) − ρ(x, γ_i), 1 ≤ i ≤ L−1.    (11)

Under the assumptions of Prop. 1, one obtains:

ρ**(u) = ρ(x, γ_1) + ⟨u, r⟩ + δ_C(u), C := { u ∈ ℝ^{L−1} : 1 ≥ u_1 ≥ … ≥ u_{L−1} ≥ 0 }.    (12)
Proof.

See appendix. ∎

Up to an offset (which is irrelevant for the optimization), one can see that (12) coincides with the dataterm of [15], the discretizations of [17, 18], and – after a change of variable – with [11]. This not only proves that the latter optimize a convex envelope, but also shows that our method naturally generalizes their work from piecewise linear to arbitrary piecewise convex energies. Fig. 7(a) and Fig. 7(b) illustrate the difference between the two approximations on the example of a nonconvex stereo matching cost.
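As a sanity check of Eqs. (11)–(12) in the form reconstructed above, the following toy computation (our own code, arbitrary cost values) confirms that on feasible lifted vectors the dataterm is exactly the linear interpolation of the label costs:

import numpy as np

gamma = np.array([0.0, 0.25, 0.5, 1.0])
costs = np.array([1.0, 0.2, 0.7, 0.4])       # rho at the labels (toy values)
r = np.diff(costs)                            # Eq. (11)

u = np.array([1.0, 0.5, 0.0])                 # lifted vector 1_2^0.5
lifted_cost = costs[0] + u @ r                # Eq. (12), offset rho(gamma_1)
direct_cost = 0.5 * costs[1] + 0.5 * costs[2] # linear interpolation on Gamma_2
assert np.isclose(lifted_cost, direct_cost)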

Because our method allows arbitrary convex functions on each Γ_i, we can prove that, for the two-label case, our approach optimizes the convex envelope of the dataterm.

Proposition 3.

In the case of binary labeling, i.e., L = 2, the convex envelope of (6) reduces to

ρ**(u) = (ρ(x, ·) + δ_Γ)**(γ_1 + u (γ_2 − γ_1)), u ∈ [0, 1],    (13)

i.e., the convex envelope of the original dataterm on Γ, evaluated at the back-projected argument.
Proof.

See appendix. ∎

3.2 A Lifted Representation of the Total Variation

We now want to find a lifted convex formulation that emulates the total variation regularization in (1). We follow [5] and define an appropriate integrand Φ of the functional

TV(u) = ∫_Ω Φ(Du),    (14)

where the distributional derivative Du is a finite ℝ^{(L−1)×d}-valued Radon measure [2]. We define

Φ(g) := min_{1 ≤ i, j ≤ L−1} Φ_{i,j}(g).    (15)

The individual Φ_{i,j} are given by:

Φ_{i,j}(g) := λ |γ_i^α − γ_j^β| ‖ν‖₂ if g = (1_i^α − 1_j^β) ν⊤,    (16)

for some α, β ∈ [0, 1] and ν ∈ ℝ^d, and ∞ otherwise. The intuition is that Φ_{i,j} penalizes a jump from γ_i^α to γ_j^β in the direction of ν. Since Φ is nonconvex we compute the convex envelope.

Proposition 4.

The convex envelope of (15) is

Φ**(g) = sup_{q ∈ K} ⟨q, g⟩,    (17)

where the constraint set K ⊂ ℝ^{(L−1)×d} is given as:

K = { q : ‖q⊤(1_i^α − 1_j^β)‖₂ ≤ λ |γ_i^α − γ_j^β| for all i, j, α, β }.    (18)
Proof.

See appendix. ∎

Figure 10: Illustration of the epigraph projection. The left subfigure shows the projection onto the epigraph of the conjugate of a convex quadratic piece; the right subfigure illustrates the piecewise linear case. In both cases, all points that lie in the gray sets are orthogonally projected onto the respective linear parts, whereas the points that lie in the green sets are projected onto the parabolic part (in the quadratic case) or onto the kinks (in the piecewise linear case). In the piecewise linear case the green sets are normal cones. The red dashed lines correspond to the boundary cases. The slopes of the segments of the conjugate correspond to the (sub-)label positions of the piece.

The set K from Eq. (36) involves infinitely many constraints, which makes numerical optimization difficult. As the following proposition reveals, the infinite number of constraints can be reduced to only linearly many, allowing us to enforce the constraints exactly.

Proposition 5.

If the labels are ordered, i.e., γ_1 < γ_2 < … < γ_L, then the constraint set K from Eq. (36) is equal to

K = { q ∈ ℝ^{(L−1)×d} : ‖q_i‖₂ ≤ λ (γ_{i+1} − γ_i), 1 ≤ i ≤ L−1 },    (19)

where q_i ∈ ℝ^d denotes the i-th row of q.
Proof.

See appendix. ∎

This shows that the proposed regularizer coincides with the total variation from [5], where it has been derived based on (16) for the case of two labels and u restricted to binary values. Prop. 5 together with Prop. 3 show that for L = 2 our formulation amounts to unlifted optimization with a convexified dataterm.
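With the reduction of Prop. 5 (in the form reconstructed above), enforcing the dual constraints amounts to independent ℓ²-ball projections, one per interval; a minimal sketch (our own helper):

import numpy as np

def project_K(q, gamma, lam):
    # Project dual variables q (shape (L-1, d)) onto the set (19):
    # per-interval L2 balls of radius lam * (gamma[i+1] - gamma[i]).
    radii = lam * np.diff(gamma)
    norms = np.linalg.norm(q, axis=1)
    scale = np.minimum(1.0, radii / np.maximum(norms, 1e-12))
    return q * scale[:, None]

q = np.random.randn(3, 2)                  # L = 4 labels, d = 2
q = project_K(q, np.array([0.0, 0.25, 0.5, 1.0]), lam=0.1)

This is the "simple ℓ²-ball projection" referred to in Sec. 4.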

4 Numerical Optimization

Figure 17: Denoising comparison. We compare the proposed method to the baseline method [17] on the convex ROF problem. We show the time in seconds required for each method to produce a solution within a certain energy gap to the optimal solution. As the baseline method optimizes a piecewise linear approximation of the quadratic dataterm, it fails to reach that optimality gap even for a large number of labels. In contrast, while the proposed lifting method can solve a large class of nonconvex problems, it is almost as efficient as direct methods on convex problems.
Figure 26: Denoising using a robust truncated quadratic dataterm. The top row shows the input image along with the results obtained by our approach for a varying number of labels. The bottom row shows the results obtained by the baseline method [17]. The energy of the final solution as well as the total runtime are given below each image.

Discretizing the domain Ω ⊂ ℝ^d as a d-dimensional Cartesian grid, the relaxed energy minimization problem becomes

min_{u : Ω → ℝ^{L−1}} Σ_{x ∈ Ω} ρ**(x, u(x)) + Φ**((∇u)(x)),    (20)

where ∇ denotes a forward-difference operator with (∇u)(x) ∈ ℝ^{(L−1)×d}. We rewrite the dataterm given in equation (8) by replacing the pointwise maximum over the conjugates ρ_i* with an auxiliary real variable that is constrained to dominate each conjugate, and obtain the following saddle point formulation of problem (20):

min_{u} sup_{(v, w), q ∈ K} Σ_{x ∈ Ω} ⟨u(x), v(x)⟩ − w(x) + ⟨q(x), (∇u)(x)⟩,    (21)

subject to ρ_i*(x, v(x)) ≤ w(x) for all x ∈ Ω, 1 ≤ i ≤ L−1.    (22)

We numerically compute a minimizer of problem (21) using a first-order primal-dual method [6, 16] with diagonal preconditioning [14] and adaptive steps [9]. It basically alternates between a gradient descent step in the primal variable and a gradient ascent step in the dual variables. Subsequently, the dual variables are orthogonally projected onto the sets K and (22), respectively. The projection onto the set K is a simple ℓ²-ball projection. To simplify the projection onto (22), we transform the (L−1)-dimensional epigraph constraints into one-dimensional scaled epigraph constraints by introducing an additional variable r(x) ∈ ℝ^{L−1} with:

r_i(x) := w(x) − ⟨1_{i−1}, v(x)⟩, 1 ≤ i ≤ L−1.    (23)

Using equation (9), we can now rewrite the constraints in (22) as

sup_{α ∈ [0,1]} α·v_i(x) − ρ(x, γ_i^α) ≤ r_i(x), 1 ≤ i ≤ L−1,    (24)

i.e., the pair (v_i(x), r_i(x)) has to lie in the epigraph of the one-dimensional conjugate of the convex piece on Γ_i.

We implement the newly introduced equality constraints (23) by introducing a Lagrange multiplier. It remains to discuss the orthogonal projections onto the epigraphs of the conjugates. Currently, we support quadratic and piecewise linear convex pieces. For the piecewise linear case, the conjugate is itself a piecewise linear function: its slopes correspond to the positions of the sublabels and its intercepts correspond to the negative function values at the sublabel positions.

The conjugates as well as the epigraph projections of both a quadratic and a piecewise linear piece are depicted in Fig. 10. For the quadratic case, the projection onto the epigraph of a parabola is computed using [23, Appendix B.2].
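Putting the pieces together, the overall iteration has the following structure (a generic sketch of the alternation described above, not the authors' CUDA code; prox_data stands for the dataterm update including the epigraph projections of Fig. 10, and fixed step sizes must satisfy tau * sigma * ||grad||^2 <= 1):

import numpy as np

def primal_dual(grad, div, prox_data, project_K, u0, q0, tau, sigma, iters=500):
    # Generic first-order primal-dual loop: alternate a projected gradient
    # ascent step in the dual variable q with a proximal gradient descent
    # step in the primal variable u, followed by over-relaxation.
    u, q = u0.copy(), q0.copy()
    u_bar = u.copy()
    for _ in range(iters):
        q = project_K(q + sigma * grad(u_bar))   # dual ascent + projection
        u_prev = u
        u = prox_data(u + tau * div(q), tau)     # primal descent + data prox
        u_bar = 2 * u - u_prev                   # extrapolation
    return u, q

In the actual implementation, diagonal preconditioning [14] and adaptive steps [9] replace the fixed tau and sigma.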

Figure 39: Comparison to the MRF approach presented in [27]. The first row shows DC-Linear, the second row DC-MRF, and the third row our results, each for an increasing number of convex pieces on the truncated quadratic energy (26). Below the figures we show the final nonconvex energy. We achieve competitive results while using a more compact representation and generalizing to isotropic regularizers.

5 Experiments

We implemented the primal-dual algorithm in CUDA to run on GPUs. Both our implementation of the functional lifting framework [17], which will serve as a baseline method, and the proposed method require a number of optimization variables that grows linearly in the number of labels and in the number of points used to discretize the domain Ω. As we will show, our method requires far fewer labels to yield comparable results, thus leading to an improvement in accuracy, memory usage, and speed.

Figure 42: We compare the proposed relaxation with an anisotropic regularizer to isotropic regularization on the stereo matching example. Using an anisotropic formulation as in [27] leads to grid bias.
(a) One of the input images; (b)–(f) Proposed, for an increasing number of labels; (g)–(l) Baseline, for an increasing number of labels.
Figure 55: Stereo comparison. We compare the proposed method to the baseline method on the example of stereo matching. The first column shows one of the two input images and, below it, the baseline method with the full number of labels. The proposed relaxation requires far fewer labels to reach a smooth depth map: even with many labels the label space discretization of the baseline method is strongly visible, while the proposed method yields a smooth result already with very few labels.

5.1 Rudin-Osher-Fatemi Model

As a proof of concept, we first evaluate the novel relaxation on the well-known Rudin-Osher-Fatemi (ROF) model [20]. It corresponds to (1) with the following dataterm:

ρ(x, u(x)) = (u(x) − f(x))²,    (25)

where f : Ω → ℝ denotes the input data. While there is no practical use in applying convex relaxation methods to an already convex problem such as the ROF model, the purpose of this experiment is two-fold. Firstly, it allows us to measure the overhead introduced by our method by comparing it to standard convex optimization methods which do not rely on functional lifting. Secondly, we can experimentally verify that the relaxation is tight for a convex dataterm.

In Fig. 17 we solve (25) directly using the primal-dual algorithm [9], using the baseline functional lifting method [17] and using our proposed algorithm. First, the globally optimal energy was computed using the direct method with a very high number of iterations. Then we measure how long each method took to reach this global optimum to a fixed tolerance.

The baseline method fails to reach the global optimum even for a large number of labels. While the lifting framework introduces a certain overhead, the proposed method finds the same globally optimal energy as the direct unlifted optimization approach and generalizes to nonconvex energies.
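For reference, the direct (unlifted) solver used in this comparison can be written compactly; below is a standard primal-dual ROF implementation (our simplified version, assuming the dataterm (25) as reconstructed above and a 2D image):

import numpy as np

def grad(u):                                  # forward differences, Neumann
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):                              # negative adjoint of grad
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]; d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]; d[1:, :] -= py[:-1, :]
    return d

def rof(f, lam, tau=0.25, sigma=0.5, iters=300):
    u = f.copy(); px = np.zeros_like(f); py = np.zeros_like(f)
    u_bar = u.copy()
    for _ in range(iters):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        n = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)  # lam-ball projection
        px, py = px / n, py / n
        u_prev = u
        u = (u + tau * div(px, py) + 2 * tau * f) / (1 + 2 * tau)  # prox of (u-f)^2
        u_bar = 2 * u - u_prev
    return u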

5.2 Robust Truncated Quadratic Dataterm

The quadratic dataterm in (25) is often not well suited for real-world data, as it stems from a pure Gaussian noise assumption and does not model outliers. We now consider a robust truncated quadratic dataterm:

ρ(x, u(x)) = min { β (u(x) − f(x))², ν }.    (26)

To implement (26), we use a piecewise polynomial approximation of the dataterm. In Fig. 26 we degraded the input image with additive Gaussian and salt-and-pepper noise; the parameters in (26) and the regularization strength were kept fixed across all runs. The proposed method requires significantly fewer labels to find lower energies than the baseline method.

5.3 Comparison to the Method of Zach and Kohli

We remark that Prop. 4 and Prop. 5 hold for arbitrary convex one-homogeneous functionals in place of the Euclidean norm in equation (16). In particular, they hold for the anisotropic (ℓ¹) total variation. This generalization allows us to directly compare our convex relaxation to the MRF approach of Zach and Kohli [27]. In Fig. 39 we show the results of optimizing the two models entitled “DC-Linear” and “DC-MRF” proposed in [27], and of our proposed method with anisotropic regularization, on the robust truncated denoising energy (26). The parameters and the label space are chosen as described in [27].

(a) One of the input images; (b)–(f) Proposed; (g)–(l) Baseline.
Figure 68: Depth from focus comparison. We compare our method to the baseline approach on the problem of depth from focus. First column: one of the 374 differently focused input images and the baseline method with the full number of labels. Following columns: proposed relaxation (top row) vs. baseline (bottom row) for 2, 4, 8, 16 and 32 labels each.

Overall, all the energies are lower than the ones reported in [27]. It can be seen from Fig. 39 that the proposed relaxation is competitive with the one proposed in [27]. In addition, the proposed relaxation uses a more compact representation and extends to isotropic regularizers. To illustrate the advantages of isotropic regularization, Fig. 42(a) and Fig. 42(b) compare our proposed method with isotropic and anisotropic regularization on the stereo matching problem considered in the next section.

5.4 Stereo Matching

Given a pair of rectified images, the task of finding a correspondence between the two images can be formulated as an optimization problem over a scalar field u : Ω → ℝ, where u(x) denotes the displacement along the epipolar line associated with each x ∈ Ω. The overall cost functional fits Eq. (1). In our experiments, we computed the matching cost over the range of disparities on the Middlebury stereo benchmark [21], using a truncated sum of absolute gradient differences over a small patch. For the dataterm of the proposed relaxation, we convexify the matching cost in each interval Γ_i by numerically computing the convex envelope using the gift wrapping algorithm.
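Such a numerical convexification can be sketched in a few lines. Below we use Andrew's monotone chain instead of gift wrapping (our choice; both compute the same lower hull of the sampled costs):

import numpy as np

def convex_envelope_support(gammas, costs):
    # Lower convex hull of the sampled cost (gamma_k, rho(gamma_k)):
    # returns the indices of the samples supporting the convex envelope.
    hull = []
    for k in range(len(gammas)):
        while len(hull) >= 2:
            i, j = hull[-2], hull[-1]
            cross = ((gammas[j] - gammas[i]) * (costs[k] - costs[i])
                     - (costs[j] - costs[i]) * (gammas[k] - gammas[i]))
            if cross <= 0:      # j lies above segment i-k: not on the envelope
                hull.pop()
            else:
                break
        hull.append(k)
    return hull

g = np.linspace(0.0, 1.0, 9)
c = np.sin(9 * g) + g                     # toy nonconvex matching cost
support = convex_envelope_support(g, c)   # piecewise linear convex minorant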

The first row in Fig. 55 shows the result of the proposed relaxation using the convexified energy between two labels. The second row shows the baseline approach using the same number of labels. Even for L = 2, the proposed method produces a reasonable depth map, while the baseline approach basically corresponds to a two-region segmentation.

5.5 Phase Unwrapping

Many sensors, such as time-of-flight cameras or interferometric synthetic aperture radar (SAR), yield cyclic data lying on the circle S¹. Here we consider the task of total variation regularized unwrapping. As is shown on the left in Fig. 76, the dataterm is a nonconvex function where each minimum corresponds to a phase shift by 2π:

ρ(x, u(x)) = min_{k ∈ ℤ} (u(x) − f(x) − 2πk)².    (27)

For the experiments, we approximated the nonconvex energy by quadratic pieces as depicted in Fig. 76. The label space is chosen large enough to cover the range of the unwrapped heightmap, and the regularization parameter λ was kept fixed. Again, it is visible in Fig. 76 that the baseline method shows label space discretization and fails to unwrap the depth map correctly if the number of labels is chosen too low. The proposed method yields a smooth unwrapped result using only 8 labels.
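A minimal sketch of such a cyclic cost (our own rendering of Eq. (27) in the squared-distance form reconstructed above; truncating the shift range is a practical simplification):

import numpy as np

def unwrap_cost(u, f, num_shifts=3):
    # Squared distance of u to the nearest 2*pi-shifted copy of the
    # wrapped measurement f -- one quadratic valley per shift k.
    ks = np.arange(-num_shifts, num_shifts + 1)
    return np.min((u - f - 2.0 * np.pi * ks[:, None]) ** 2, axis=0)

u = np.linspace(-10.0, 10.0, 5)
print(unwrap_cost(u, f=0.5))

Each quadratic valley can then directly serve as one convex piece of the lifted dataterm.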

5.6 Depth From Focus

In depth from focus, the task is to recover the depth of a scene from a stack of images, each taken from a constant position but with a different focal setting, so that in each image only the objects at a certain depth are sharp. We achieve this by estimating the depth of a point by locally maximizing its contrast over the set of images, computing the cost using the modified Laplacian function [13] as a contrast measure. Similar to the stereo experiments, we convexify the cost on each label range by computing the convex hull. The results are shown in Fig. 68. While the baseline method clearly shows the label space discretization, the proposed approach yields a smooth depth map. Since the proposed method uses a convex lower bound of the lifted energy, the regularizer has slightly more influence on the final result. This explains why the resulting depth maps in Fig. 68 and Fig. 55 look overall less noisy.
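The contrast measure itself is simple to compute; below is a minimal sketch of the modified Laplacian (our simplified version of [13], assuming a grayscale image and unit pixel step):

import numpy as np

def modified_laplacian(img):
    # |2I - I_left - I_right| + |2I - I_up - I_down|: the absolute second
    # differences are summed separately per axis, so horizontal and
    # vertical contrast cannot cancel each other.
    p = np.pad(img, 1, mode="edge")
    mlx = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    mly = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return mlx + mly

img = np.random.rand(64, 64)
contrast = modified_laplacian(img)   # higher where the image is in focus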

6 Conclusion

(a) Piecewise convex energy
(b) Input image
(c) Ground truth
(d)–(f) Baseline, for an increasing number of labels
(g) Proposed (8 labels)
Figure 76: We show the piecewise convex approximation of the phase unwrapping energy, followed by the cyclic input image and the unwrapped ground truth. With only 8 labels, the proposed method already yields a smooth reconstruction. The baseline method fails to unwrap the heightmap correctly when too few labels are used, and even with more labels the discretization is still noticeable.

In this work we proposed a tight convex relaxation that can be interpreted as a sublabel-accurate formulation of classical multilabel problems. We showed that the local convex envelope involves infinitely many constraints; however, we proved that it suffices to consider linearly many of those. The final formulation is a simple saddle-point problem that admits fast primal-dual optimization. Our method maintains sublabel accuracy even after discretization and for that reason outperforms existing spatially continuous methods. Interesting directions for future work include higher-dimensional label spaces, manifold-valued data, and more general regularizers.

Appendix A Appendix

Proof of Proposition 1.

The proof follows from a direct calculation. We start with the definition of the biconjugate:

ρ**(u) = sup_{v} ⟨u, v⟩ − ρ*(v) = sup_{v} ⟨u, v⟩ − (min_{1 ≤ i ≤ L−1} ρ_i)*(v) = sup_{v} ⟨u, v⟩ − max_{1 ≤ i ≤ L−1} ρ_i*(v).    (28)

This shows the first equation inside the proposition. For the individual ρ_i we again start with the definition of the convex conjugate:

ρ_i*(v) = sup_{α ∈ [0,1]} ⟨1_i^α, v⟩ − ρ(x, γ_i^α) = ⟨1_{i−1}, v⟩ + sup_{α ∈ [0,1]} α·v_i − ρ(x, γ_i^α).    (29)

Applying the substitution γ = γ_i^α, and consequently α = (γ − γ_i)/(γ_{i+1} − γ_i), yields:

ρ_i*(v) = ⟨1_{i−1}, v⟩ + sup_{γ ∈ Γ_i} (γ − γ_i)/(γ_{i+1} − γ_i) · v_i − ρ(x, γ).    (30) ∎

Proof of Proposition 2.

It is easy to see that, for ρ linear on each Γ_i,

ρ(x, γ_i^α) = (1 − α) ρ(x, γ_i) + α ρ(x, γ_{i+1}) = ρ(x, γ_1) + ⟨1_i^α, r⟩,

with r as in (11). Hence the lifted energy (7) can be written as

ρ(u) = ρ(x, γ_1) + ⟨u, r⟩ + δ_S(u), S := { 1_i^α : 1 ≤ i ≤ L−1, α ∈ [0, 1] }.

Since adding an affine term commutes with taking the convex envelope, it remains to compute the convex hull of the set S. Every element of S satisfies 1 ≥ u_1 ≥ … ≥ u_{L−1} ≥ 0, and these constraints define a convex set. Conversely, any u with 1 ≥ u_1 ≥ … ≥ u_{L−1} ≥ 0 is a convex combination of elements of S: by the layer cake formula, u = ∫_0^1 𝟙[u ≥ t] dt, and each binary vector 𝟙[u ≥ t] is of the form 1_i^α with α ∈ {0, 1}. Together this yields

ρ**(u) = ρ(x, γ_1) + ⟨u, r⟩ + δ_C(u), C = conv(S) = { u : 1 ≥ u_1 ≥ … ≥ u_{L−1} ≥ 0 },

which is (12). ∎

Proof of Proposition 3.

For the special case L = 2 the lifted representation u is a scalar, and the biconjugate from (28) is just:

ρ**(u) = sup_{v ∈ ℝ} u·v − ρ_1*(v).    (31)

Now using the first line in (30) with i = 1, ρ**(u) becomes:

ρ**(u) = sup_{c ∈ ℝ} (γ_1 + u (γ_2 − γ_1))·c − (ρ(x, ·) + δ_Γ)*(c) = (ρ(x, ·) + δ_Γ)**(γ_1 + u (γ_2 − γ_1)),    (32)

where we used ⟨1_0, v⟩ = 0 as well as the substitution c = v/(γ_2 − γ_1). ∎

Proof of Proposition 4.

We compute the individual conjugates Φ_{i,j}* as:

Φ_{i,j}*(q) = sup_{α, β ∈ [0,1]} sup_{ν ∈ ℝ^d} ⟨q, (1_i^α − 1_j^β) ν⊤⟩ − λ |γ_i^α − γ_j^β| ‖ν‖₂.    (33)

The inner maximum over ν is the conjugate of the ℓ²-norm, scaled by λ |γ_i^α − γ_j^β| and evaluated at q⊤(1_i^α − 1_j^β). This yields:

Φ_{i,j}*(q) = δ_{K_{i,j}}(q), K_{i,j} := { q : ‖q⊤(1_i^α − 1_j^β)‖₂ ≤ λ |γ_i^α − γ_j^β| for all α, β ∈ [0, 1] }.    (34)

For the overall biconjugate we have:

Φ**(g) = sup_{q} ⟨q, g⟩ − max_{i,j} Φ_{i,j}*(q) = sup_{q ∈ K} ⟨q, g⟩.    (35)

Since we take the maximum over all conjugates, the set K is given as the intersection of the sets described by the individual indicator functions Φ_{i,j}*:

K = ∩_{1 ≤ i, j ≤ L−1} K_{i,j}.    (36) ∎

Proof of Proposition 5.

First we rewrite (36) by expanding the matrix-vector product into sums. For i < j and α, β ∈ [0, 1], the constraint in (34) reads:

‖(1 − α) q_i + Σ_{l=i+1}^{j−1} q_l + β q_j‖₂ ≤ λ ((1 − α)(γ_{i+1} − γ_i) + Σ_{l=i+1}^{j−1} (γ_{l+1} − γ_l) + β (γ_{j+1} − γ_j)).    (37)

Since the remaining cases for i and j in (36) are symmetric or degenerate versions of (37), it is enough to consider (37) instead of (36).

Let 1 ≤ i ≤ j ≤ L−1. We show the equivalence of the following three statements:

(37) holds for all α, β ∈ [0, 1],

‖Σ_{l=i}^{j} q_l‖₂ ≤ λ (γ_{j+1} − γ_i),    (38)

‖q_i‖₂ ≤ λ (γ_{i+1} − γ_i).    (39)

The direction “(37) ⇒ (38)” follows by setting α = 0 and β = 1 in (37), and “(38) ⇒ (39)” follows by setting j = i in (38).

The direction “(39) ⇒ (38)” can be proven by a quick calculation:

‖Σ_{l=i}^{j} q_l‖₂ ≤ Σ_{l=i}^{j} ‖q_l‖₂ ≤ λ Σ_{l=i}^{j} (γ_{l+1} − γ_l) = λ (γ_{j+1} − γ_i).    (40)

It remains to show “(38) ⇒ (37)”. We start with the degenerate case i = j, in which the constraint from (34) reads:

‖(α − β) q_i‖₂ = |α − β| ‖q_i‖₂ ≤ λ |α − β| (γ_{i+1} − γ_i),    (41)

which follows directly from (39).

Now let i < j. Since the labels are ordered, it also holds that γ_j^β ≥ γ_i^α; thus it is equivalent to show (37) without the absolute value on the right-hand side.

First we show “(38) ⇒ (37)” for α ∈ {0, 1} and β ∈ {0, 1}: in each of the four corner cases the left-hand side of (37) is the norm of a contiguous sum of rows q_l and the right-hand side is λ times the corresponding label distance, so the constraint is an instance of (38); e.g., for α = 1, β = 0:

‖Σ_{l=i+1}^{j−1} q_l‖₂ ≤ λ (γ_j − γ_{i+1}).    (42)

Using a similar argument, and using the above, “(38) ⇒ (37)” follows for all α, β ∈ [0, 1]: the left-hand side of (37) is a convex function of (α, β), the right-hand side is affine in (α, β), so validity of the constraint at the four corners of [0, 1]² implies its validity on the entire square. ∎