1 Introduction
1.1 Nonconvex Vectorial Problems
In this paper, we derive a sublabel-accurate convex relaxation for vectorial optimization problems of the form

(1) $\min_{u : \Omega \to \Gamma} \int_\Omega \rho(x, u(x)) \,\mathrm{d}x + \lambda \, TV(u),$

where $\Omega \subset \mathbb{R}^d$, $\Gamma \subset \mathbb{R}^n$, and $\rho : \Omega \times \Gamma \to \mathbb{R}$ denotes a generally nonconvex pointwise dataterm. As regularization we focus on the total variation, defined as:

(2) $TV(u) = \sup_{q} \int_\Omega \langle u(x), \operatorname{Div} q(x) \rangle \,\mathrm{d}x, \quad \|q(x)\|_{S_\infty} \le 1,$

where $\|\cdot\|_{S_\infty}$ is the Schatten-$\infty$ norm on $\mathbb{R}^{n \times d}$, i.e., the largest singular value. For differentiable functions $u$ we can integrate (2) by parts to find

(3) $TV(u) = \int_\Omega \|\nabla u(x)\|_{S_1} \,\mathrm{d}x,$

where the dual norm $\|\cdot\|_{S_1}$ penalizes the sum of the singular values of the Jacobian, which encourages the individual components of $u$ to jump in the same direction. This type of regularization is part of the framework of Sapiro and Ringach [19].
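As a numerical aside (not from the paper; NumPy assumed, and the matrices are invented examples), the Schatten norms above can be computed from the SVD. The two example Jacobians below have equal Frobenius norm, but the rank-1 (aligned-jump) one has a strictly smaller Schatten-1 (nuclear) norm, illustrating why this penalty favors components jumping in the same direction:

```python
import numpy as np

def schatten_inf(J):
    """Schatten-infinity norm: largest singular value (spectral norm)."""
    return np.linalg.svd(J, compute_uv=False)[0]

def schatten_1(J):
    """Schatten-1 (nuclear) norm: sum of singular values, dual to S_inf."""
    return np.linalg.svd(J, compute_uv=False).sum()

# A rank-1 Jacobian: both components of u jump along the same direction nu.
nu = np.array([3.0, 4.0])                 # shared jump normal, |nu| = 5
J_aligned = np.outer([1.0, 2.0], nu)
# A Jacobian whose rows point in different directions, with the
# same Frobenius norm as J_aligned (both equal sqrt(125)).
J_mixed = np.diag([5.0, 10.0])

# For a rank-1 matrix the nuclear norm equals the spectral norm, so the
# aligned Jacobian is penalized less than the misaligned one.
print(schatten_1(J_aligned), schatten_1(J_mixed))
```

For rank-1 matrices there is a single nonzero singular value, so $S_1$ and $S_\infty$ coincide; this is the mechanism behind the coupling of the channels.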
1.2 Related Work
Due to its nonconvexity, the optimization of (1) is challenging. For the scalar case ($n = 1$), Ishikawa [9] proposed a pioneering technique to obtain globally optimal solutions in a spatially discrete setting, given by the minimum st-cut of a graph representing the space $\Omega \times \Gamma$. A continuous formulation was introduced by Pock et al. [15], exhibiting several advantages such as less grid bias and parallelizability.
In a series of papers [16, 14], connections of the above approaches were made to the mathematical theory of Cartesian currents [6] and the calibration method for the Mumford-Shah functional [1], leading to a generalization of the convex relaxation framework [15] to more general (in particular nonconvex) regularizers.
Subsequently, researchers have strived to generalize the concept of functional lifting and convex relaxation to the vectorial setting ($n > 1$). If the dataterm and the regularizer are both separable in the label dimension, one can simply apply the above convex relaxation approach in a channel-wise manner to each component separately. But when either the dataterm or the regularizer couples the label components, the situation becomes more complex [8, 20].
The approach which is most closely related to our work, and which we consider as a baseline method, is the one by Lellmann et al. [11]. They consider coupled dataterms with coupled total variation regularization of the form (2).
A drawback shared by all mentioned papers is that ultimately one has to discretize the label space. While Lellmann et al. [11] propose a sublabel-accurate regularizer, we show that their dataterm leads to solutions which still have a strong bias towards the label grid. For the scalar-valued setting, continuous label spaces have been considered in the MRF community by Zach et al. [22] and Fix et al. [5]. The paper [21] proposes a method for mixed continuous and discrete vectorial label spaces, where everything is derived in the spatially discrete MRF setting. Möllenhoff et al. [12] recently proposed a novel formulation of the scalar-valued case which retains fully continuous label spaces even after discretization. The contribution of this work is to extend [12] to vectorial label spaces, thereby complementing [11] with a sublabel-accurate dataterm.
1.3 Contribution
In this work we propose the first sublabel-accurate convex formulation of vectorial labeling problems. It generalizes the formulation for scalar-valued labeling problems [12] and thus includes important applications such as optical flow estimation or color image denoising. We show that our method, derived in a spatially continuous setting, has a variety of interesting theoretical properties as well as practical advantages over the existing labeling approaches:

We generalize existing functional lifting approaches (see Sec. 2.2).

Due to its sublabel-accuracy, our method requires only a small number of labels to produce good results, which leads to a drastic reduction in memory. We believe that this is a vital step towards the real-time capability of lifting and convex relaxation methods. Moreover, our method eliminates the label bias that previous lifting methods suffer from, even for many labels.

For convex dataterms, our method is equivalent to the unlifted problem (see Prop. 4). Therefore, it allows a seamless transition between direct optimization and convex relaxation approaches.
1.4 Notation
We write $\langle \cdot, \cdot \rangle$ for the standard inner product on $\mathbb{R}^n$, or the Frobenius product if the arguments are matrices. Similarly, $|\cdot|$ without any subscript denotes the usual Euclidean norm, respectively the Frobenius norm for matrices.
We denote the convex conjugate of a function $f$ by $f^*(y) = \sup_x \langle x, y \rangle - f(x)$. It is an important tool for devising convex relaxations, as the biconjugate $f^{**}$ is the largest lower-semicontinuous (lsc.) convex function below $f$. For the indicator function of a set $C$ we write $\delta_C$, i.e., $\delta_C(x) = 0$ if $x \in C$ and $\delta_C(x) = \infty$ otherwise. $\Delta$ stands for the unit simplex.
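As a hedged numerical aside (not from the paper), the role of the biconjugate as the largest convex lsc. minorant can be visualized with a discrete Legendre-Fenchel transform on a sampled grid; the double-well cost below is an invented example:

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)   # primal grid
ys = np.linspace(-8.0, 8.0, 801)   # dual grid

def conjugate(vals, grid_in, grid_out):
    """Discrete Legendre-Fenchel transform: sup_x <x, y> - f(x) on grids."""
    return np.max(grid_out[None, :] * grid_in[:, None] - vals[:, None], axis=0)

f = (xs**2 - 1.0) ** 2             # a nonconvex double-well dataterm
f_star = conjugate(f, xs, ys)
f_bistar = conjugate(f_star, ys, xs)

# The biconjugate never exceeds f ...
assert np.all(f_bistar <= f + 1e-9)
# ... and it flattens the nonconvex region between the two wells.
print(f_bistar[np.argmin(np.abs(xs))])   # value of the envelope near x = 0
```

On this example the biconjugate equals the convex envelope: it bridges the two minima at $x = \pm 1$ with a flat segment, while agreeing with $f$ outside the nonconvex region.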
2 Convex Formulation
2.1 Lifted Representation
Motivated by Fig. 5, we construct an equivalent representation of (1) in a higher dimensional space, before taking the convex envelope.
Let $\Gamma \subset \mathbb{R}^n$ be a compact and convex set. We partition $\Gamma$ into a set of $n$-simplices $\{\Delta_1, \dots, \Delta_T\}$ so that $\Gamma$ is a disjoint union of these up to a set of measure zero. Let $t^k$ denote the $k$-th vertex of the triangulation and denote by $V = \{t^1, \dots, t^L\}$ the union of all vertices, referred to as labels. For $u \in \Gamma$, we refer to $u$ as a sublabel. Any sublabel can be written as a convex combination of the vertices $t^{i_1}, \dots, t^{i_{n+1}}$ of a simplex $\Delta_i$ containing it, for appropriate barycentric coordinates $\alpha$:

(4) $u = \sum_{k=1}^{n+1} \alpha_k \, t^{i_k}, \quad \alpha \ge 0, \ \textstyle\sum_{k=1}^{n+1} \alpha_k = 1.$

By encoding the vertices using a one-of-$L$ representation we can identify any $u \in \Gamma$ with a sparse vector $\mathbf{u} \in \mathbb{R}^L$ containing at least $L - (n+1)$ zeros, and vice versa:

(5) $\mathbf{u} = \sum_{k=1}^{n+1} \alpha_k \, e^{i_k}.$

The entries of the vector $e^j \in \mathbb{R}^L$ are zero except for the $j$-th entry, which is equal to one. We refer to $\mathbf{u}$ as the lifted representation of $u$. This one-to-one correspondence between $u$ and $\mathbf{u}$ is shown in Fig. 6. Note that both $\alpha$ and $i$ depend on $u$. However, for notational convenience we drop the dependence on $u$ whenever we consider a fixed point $u$.
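To make the sublabel/lifting correspondence concrete, here is a minimal sketch on a hypothetical two-triangle label space (NumPy assumed; the triangulation and helper names are our own, not the paper's):

```python
import numpy as np

# Hypothetical 2-D label space Gamma = [0,1]^2 split into two triangles.
labels = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # t^1..t^4
simplices = [(0, 1, 2), (1, 3, 2)]                                    # vertex indices

def lift(u):
    """Return the sparse lifted vector of a sublabel u (Eq. (5) in spirit)."""
    for verts in simplices:
        # barycentric system: vertex coordinates stacked over the sum-to-one row
        T = np.vstack([labels[list(verts)].T, np.ones(3)])
        alpha = np.linalg.solve(T, np.append(u, 1.0))
        if np.all(alpha >= -1e-12):          # u lies in this simplex
            lifted = np.zeros(len(labels))
            lifted[list(verts)] = alpha
            return lifted
    raise ValueError("u outside the triangulated label space")

def unlift(lifted):
    """Recover the sublabel as the convex combination of the labels."""
    return lifted @ labels

u = np.array([0.25, 0.25])
print(lift(u), unlift(lift(u)))
```

With $L = 4$ labels and $n = 2$, the lifted vector has at least $L - (n+1) = 1$ zero entry, and `unlift(lift(u))` recovers `u` exactly, mirroring the one-to-one correspondence described above.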
2.2 Convexifying the Dataterm
Let for now the weight of the regularizer in (1) be zero. Then, at each point $x \in \Omega$ we minimize a generally nonconvex energy over the compact set $\Gamma \subset \mathbb{R}^n$:

(6) $\min_{u \in \Gamma} \rho(x, u).$
[Figure: The standard lifting corresponds to a linear interpolation of the original cost in between the label locations, which are associated with the vertices in the lifted energy (lower left). The proposed method extends the cost to the relaxed set in a more precise way: the original cost is preserved on the connecting lines between adjacent labels (black lines on the bottom right) up to concave parts (red graphs and lower surface on the right). This information, which may influence the exact location of the minimizer, is lost in the standard formulation. If the solution of the lifted formulation is in the interior (gray area), an approximate solution to the original problem can still be obtained via Eq. (5).]

We set up the lifted energy so that it attains finite values if and only if the argument is a sparse representation $\mathbf{u}$ of a sublabel $u \in \Gamma$:

(7) $\boldsymbol{\rho}(\mathbf{u}) = \begin{cases} \rho\big(\sum_{k} \alpha_k t^{i_k}\big), & \text{if } \mathbf{u} = \sum_{k} \alpha_k e^{i_k},\ \alpha \in \Delta, \\ \infty, & \text{otherwise.} \end{cases}$
Problems (6) and (7) are equivalent due to the one-to-one correspondence between $u$ and $\mathbf{u}$. However, energy (7) is finite on a nonconvex set only. In order to make the optimization tractable, we minimize its convex envelope.
Proposition 1
The convex envelope of (7) is given as:
(8) 
and are given as , , where are the columns of the matrix .
Proof
Follows from a direct calculation starting at the definition of the biconjugate. See Appendix 0.A for a detailed derivation.
The geometric intuition of this construction is depicted in Fig. 11. Note that if one prescribes the value of $\boldsymbol{\rho}$ in (7) only on the vertices of the unit simplices, i.e., $\boldsymbol{\rho}(\mathbf{u}) = \rho(t^k)$ if $\mathbf{u} = e^k$ and $\infty$ otherwise, one obtains the linear biconjugate on the feasible set. This coincides with the standard relaxation of the dataterm used in [16, 10, 4, 11]. In that sense, our approach can be seen as relaxing the dataterm in a more precise way, by incorporating the true value of the cost not only on the finite set of labels, but also everywhere in between, i.e., on every sublabel.
2.3 Lifting the Vectorial Total Variation
We define the lifted vectorial total variation as
(9) 
where $D\mathbf{u}$ denotes the distributional derivative of $\mathbf{u}$ and $\Psi$ is positively one-homogeneous, i.e., $\Psi(c\,\mathbf{x}) = c\,\Psi(\mathbf{x})$ for all $c \ge 0$. For such functions, the meaning of (9) can be made fully precise using the polar decomposition of the Radon measure $D\mathbf{u}$ [2, Cor. 1.29, Thm. 2.38]. However, in the following we restrict ourselves to an intuitive motivation for the derivation of $\Psi$ for smooth functions.
Our goal is to find $\Psi$ so that the lifted regularizer coincides with the vectorial total variation whenever $\mathbf{u}$ corresponds to some $u$. In order for the equality to hold, it must in particular hold for all $u$ that are classically differentiable and whose Jacobian $\nabla u$ is of rank 1, i.e., $\nabla u = \nu\,\eta^\top$ for some vectors $\nu$, $\eta$. This rank-1 constraint enforces the different components of $u$ to have the same jump normal, which is desirable in many applications. In that case, we observe
(10) 
For the corresponding lifted representation , we have . Therefore it is natural to require in order to achieve the goal . Motivated by these observations, we define
(11) 
where , and . Since the convex envelope of (9) is intractable, we derive a “locally” tight convex under-approximation:
(12) 
Proposition 2
The convex conjugate of is
(13) 
with convex set
(14) 
and . are the columns of .
Proof
Follows from a calculation starting at the definition of the convex conjugate . See Appendix 0.A.
Interestingly, although in its original formulation (14) the set is defined by infinitely many constraints, it can equivalently be represented by finitely many.
Proposition 3
2.4 Lifting the Overall Optimization Problem
Combining dataterm and regularizer, the overall optimization problem is given as
(16) 
A highly desirable property is that, as opposed to any other vectorial lifting approach from the literature, our method with just one simplex applied to a convex problem yields the same solution as the unlifted problem.
Proposition 4
Proof
Substituting the lifted representation into the dataterm and the regularizer yields the result. For a complete proof, see Appendix 0.A.
3 Numerical Optimization
3.1 Discretization
For now, assume that $\Omega$ is a $d$-dimensional Cartesian grid and let $\operatorname{Div}$ denote a finite-difference divergence operator. Then the relaxed energy minimization problem becomes
(18) 
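The finite-difference divergence operator is only referenced abstractly here; a minimal 2-D realization (our own convention, not necessarily the paper's: forward differences with the divergence chosen as the negative adjoint) can be checked numerically:

```python
import numpy as np

def grad(u):
    """Forward differences with zero (Neumann) boundary in each direction."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence chosen as the negative adjoint of grad, so that
    <grad u, p> = -<u, div p> holds exactly."""
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

rng = np.random.default_rng(1)
u = rng.standard_normal((6, 7))
px, py = rng.standard_normal((6, 7)), rng.standard_normal((6, 7))
gx, gy = grad(u)
lhs = np.sum(gx * px) + np.sum(gy * py)   # <grad u, p>
rhs = -np.sum(u * div(px, py))            # -<u, div p>
print(lhs, rhs)
```

The adjointness identity is exactly what primal-dual methods require of the discrete operator pair; the random test above verifies it up to floating-point rounding.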
In order to eliminate the pointwise maximum in Eq. (8), we introduce additional variables and additional epigraph constraints, so that each new variable attains the value of the pointwise maximum:
(19) 
where the set is given as
(20) 
For numerical optimization we use a GPU-based implementation (https://github.com/tumvision/sublabel_relax) of a first-order primal-dual method [14]. The algorithm requires the orthogonal projections of the dual variables onto the respective constraint sets in every iteration. However, the projection onto an epigraph of high dimension is difficult. We therefore rewrite the constraints as lower-dimensional epigraph constraints by introducing auxiliary variables:
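To illustrate the kind of first-order primal-dual iteration referenced above, here is a minimal sketch on an unlifted 1-D scalar ROF problem (our own toy setup, not the lifted energy or the cited implementation):

```python
import numpy as np

def pdhg_rof_1d(f, lam, n_iter=500, tau=0.25, sigma=0.5):
    """Minimal primal-dual (PDHG-style) sketch for the 1-D scalar ROF
    problem min_u 0.5*|u - f|^2 + lam*|Du|_1 with forward differences D.
    Step sizes satisfy tau*sigma*||D||^2 <= 1 since ||D||^2 <= 4."""
    u = f.copy()
    u_bar = f.copy()
    p = np.zeros(len(f) - 1)                      # dual variable for Du
    for _ in range(n_iter):
        # dual step: gradient ascent, then projection onto [-lam, lam]
        p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)
        # D^T p with boundary handling matching forward differences
        dtp = np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))
        # primal step: proximal operator of the quadratic dataterm
        u_new = (u - tau * dtp + tau * f) / (1.0 + tau)
        u_bar = 2.0 * u_new - u
        u = u_new
    return u

def rof_energy(u, f, lam):
    return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.abs(np.diff(u)))

rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
u = pdhg_rof_1d(f, lam=0.5)
print(rof_energy(u, f, 0.5), rof_energy(f, f, 0.5))
```

The scheme alternates a projected dual ascent step with a proximal primal descent step and an overrelaxation of the primal variable; the lifted problem (19) is solved with the same template, only with different projections.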
(21) 
These equality constraints can be implemented using Lagrange multipliers. For the projection onto the set we use an approach similar to [7, Figure 7].
3.2 Epigraphical Projections
Computing the Euclidean projection onto the epigraph of the conjugate dataterm is a central part of the numerical implementation of the presented method. However, in general this is nontrivial. Therefore we provide a detailed explanation of the projection methods used for different classes of dataterms: quadratic, truncated quadratic, and piecewise linear.
Quadratic case:
Let the dataterm be quadratic. A direct projection onto the corresponding epigraph is difficult. However, the epigraph can be decomposed into separate epigraphs onto which it is easier to project: for proper, convex, lsc. functions, the epigraph of the infimal convolution of two functions is the Minkowski sum of their individual epigraphs (cf. [17, Exercise 1.28, Theorem 11.23a]). This means that it suffices to compute the projections onto the epigraphs of a quadratic function and of a convex, piecewise linear function, by rewriting constraint (21) as
(22) 
For the projection onto the epigraph of the quadratic function we use the method described in [20, Appendix B.2]. The projection onto a piecewise linear function is described in the last paragraph of this section.
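The cited projection method [20, Appendix B.2] is not reproduced here; as a hedged stand-in, in the 1-D case the projection onto the epigraph of a quadratic reduces to a cubic equation, which the following sketch (NumPy assumed, our own derivation) solves directly:

```python
import numpy as np

def project_epi_quad(x0, t0, a):
    """Euclidean projection of (x0, t0) onto epi(f) for f(x) = a*x**2, a > 0."""
    if t0 >= a * x0**2:
        return x0, t0                          # already inside the epigraph
    # Stationarity of (x - x0)^2 + (a*x^2 - t0)^2 on the boundary t = a*x^2:
    #   2*a^2*x^3 + (1 - 2*a*t0)*x - x0 = 0
    roots = np.roots([2.0 * a**2, 0.0, 1.0 - 2.0 * a * t0, -x0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    # among the real stationary points, pick the closest boundary point
    x = min(real, key=lambda r: (r - x0)**2 + (a * r**2 - t0)**2)
    return x, a * x**2

print(project_epi_quad(0.0, -1.0, 1.0))   # projects onto the parabola's apex
```

Feasible points are returned unchanged; infeasible points land on the parabola itself, as required of a projection onto a closed convex set.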
Truncated quadratic case:
Let the dataterm be a truncated quadratic, as is the case for the nonconvex robust ROF model in Sec. 4.2. Again, a direct projection onto the corresponding epigraph is difficult. However, a decomposition into simpler epigraphs is possible: the conjugate of a pointwise minimum of two functions is the pointwise maximum of their conjugates, and the epigraph of a pointwise maximum is the intersection of the individual epigraphs. Hence, one can separately project onto the two epigraphs. Both of these projections can be handled using the methods from the other paragraphs.
Piecewise linear case:
In case the dataterm is piecewise linear on each simplex, i.e., it attains finite values at a discrete set of sampled sublabels and interpolates linearly between them, we have that
(23) 
Again this is a convex, piecewise linear function. For the projection onto the epigraph of such a function, a quadratic program of the form
(24) 
needs to be solved. We implemented the primal active-set method described in [13, Algorithm 16.3], and found that it solves the program in a few iterations for a moderate number of constraints.
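For intuition, a brute-force alternative to the active-set method works in the 1-D case: the projection of an infeasible point onto the epigraph of a convex piecewise linear function lies either on a facet or at a breakpoint, so one can simply enumerate the candidates (our own sketch, not [13, Algorithm 16.3]):

```python
import numpy as np

def project_epi_pwl(x0, t0, xs, fs):
    """Project (x0, t0) onto the epigraph of the convex piecewise linear
    function through the points (xs[i], fs[i]). Assumes xs is sorted, the
    samples describe a convex function, and xs[0] <= x0 <= xs[-1]."""
    if t0 >= np.interp(x0, xs, fs):
        return x0, t0                               # already feasible
    cands = list(zip(xs, fs))                       # breakpoints of the graph
    for i in range(len(xs) - 1):
        a = (fs[i + 1] - fs[i]) / (xs[i + 1] - xs[i])
        b = fs[i] - a * xs[i]
        # orthogonal projection onto the facet line t = a*x + b
        x = (x0 + a * (t0 - b)) / (1.0 + a * a)
        if xs[i] <= x <= xs[i + 1]:
            cands.append((x, a * x + b))
    return min(cands, key=lambda p: (p[0] - x0) ** 2 + (p[1] - t0) ** 2)

xs = np.array([-1.0, 0.0, 1.0])
fs = np.array([1.0, 0.0, 1.0])      # f(x) = |x| sampled at the breakpoints
print(project_epi_pwl(0.0, -1.0, xs, fs))
```

This enumeration is quadratic-free and exact for small instances; the active-set QP of the main text is the scalable counterpart for many constraints.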
4 Experiments
4.1 Vectorial ROF Denoising
In order to validate experimentally that our model is exact for convex dataterms, we evaluate it on the Rudin-Osher-Fatemi [18] (ROF) model with vectorial TV (2). In our model this corresponds to choosing a convex quadratic dataterm. As expected based on Prop. 4, the energy of the solution of the unlifted problem is equal to the energy of the projected solution of our method up to machine precision, as can be seen in Fig. 15 and Fig. 21. We point out that the sole purpose of this experiment is a proof of concept, as our method introduces an overhead and convex problems can be solved via direct optimization. It can also be seen in Fig. 15 and Fig. 21 that the baseline method [11] has a strong label bias.
4.2 Denoising with Truncated Quadratic Dataterm
For images degraded with both Gaussian and salt-and-pepper noise, we define the dataterm as a truncated quadratic. We solve the problem using the epigraph decomposition described in the second paragraph of Sec. 3.2. It can be seen that increasing the number of labels leads to lower energies and, at the same time, to a reduced effect of the TV. This is because we always compute a piecewise convex underapproximation of the original nonconvex dataterm, which gets tighter with a growing number of labels. The baseline method [11] again produces strong discretization artefacts, even for a large number of labels.
4.3 Optical Flow
We compute the optical flow between two input images. The label space is chosen according to the estimated maximum displacement between the images. The dataterm compares the first image with the displaced second image, and the local regularization weight is based on the norm of the image gradient.
In Fig. 43 we compare the proposed method to the product space approach [8]. Note that we implemented the product space dataterm using Lagrange multipliers, also referred to as the global approach in [8]. While this increases the memory consumption, it comes with lower computation time and guaranteed convergence. For our method, we sample the label space on a small number of sublabels and subsequently convexify the energy on each triangle using the quickhull algorithm [3]. For the product space approach we sample the label space at equidistant labels. As the regularizer from the product space approach differs from the proposed one, we chose the regularization weight differently for each method. As can be seen in Fig. 43, our method outperforms the product space approach w.r.t. the average endpoint error, and it also outperforms previous lifting approaches: in Fig. 47 we compare our method on large-displacement optical flow to the baseline [11]. To obtain competitive results on the Middlebury benchmark, one would need to engineer a better dataterm.
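The per-triangle convexification via quickhull [3] is not reproduced here; as a 1-D stand-in (our own sketch), the lower convex envelope of a sampled nonconvex cost can be computed with a monotone-chain hull:

```python
import numpy as np

def lower_convex_envelope(xs, fs):
    """Indices of the lower convex hull of the sampled points (xs[i], fs[i]),
    xs strictly increasing -- a 1-D stand-in for the per-simplex
    convexification that quickhull performs in higher dimensions."""
    hull = []
    for i in range(len(xs)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment (i0, i)
            cross = (xs[i1] - xs[i0]) * (fs[i] - fs[i0]) \
                  - (xs[i] - xs[i0]) * (fs[i1] - fs[i0])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

xs = np.linspace(-2.0, 2.0, 9)
fs = (xs**2 - 1.0) ** 2          # nonconvex sampled dataterm
print(lower_convex_envelope(xs, fs))
```

On the double-well example, the hull bridges the two sampled minima at $x = \pm 1$ with a flat segment and keeps the convex flanks, which is exactly the piecewise convex underapproximation of the cost discussed in Sec. 2.2.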
5 Conclusions
We proposed the first sublabel-accurate convex relaxation of vectorial multilabel problems. To this end, we approximate the generally nonconvex dataterm in a piecewise convex manner, as opposed to the piecewise linear approximation done in the traditional functional lifting approaches. This assures a more faithful approximation of the original cost function and provides a meaningful interpretation for the nonintegral solutions of the relaxed convex problem. In experimental validations on large-displacement optical flow estimation and color image denoising, we show that the computed solutions have superior quality to the traditional convex relaxation methods while requiring substantially less memory and runtime.
Appendix 0.A Theory
Proof (Proof of Proposition 1)
By definition the biconjugate of is given as
(25) 
We proceed computing the conjugate of :
(26) 
We introduce the substitution and obtain
(27) 
since the matrix is invertible for a nondegenerate triangulation. With this, we can further rewrite the conjugate as
(28) 
Proof (Proof of Proposition 2)
Define as
(29) 
Then, the lifted regularizer can be rewritten as a pointwise minimum over the individual terms
(30) 
We begin computing the conjugate of
(31) 
with the set being defined as
(32) 
Since the maximum over indicator functions of sets equals the indicator function of the intersection of those sets, we obtain
(33) 
Proof (Proof of Proposition 3)
Let s.t. for all and . For any define
(34) 
and analogously
(35) 
Let us choose an such that , . Then for all and implies that
(36) 
holds for all vectors with sufficiently small entries. Inserting the definitions of and we find that
(37) 
holds for all with sufficiently small entries. For a nondegenerate triangle, the matrix is invertible, and a simple substitution yields that
(38) 
holds for all with sufficiently small entries. This means that the operator norm of induced by the norm, i.e. the norm, is bounded by one.
Let us now show the other direction. For s.t. , note that inverting the above computation immediately yields that
(39) 
holds for all , . Our goal is to show that having this inequality on each simplex is sufficient to extend it to arbitrary pairs of simplices. The overall idea of this part of the proof is illustrated in Fig. 48.
Let and with , be given. Consider the line segment
(40) 
Since the triangulated domain is convex, there exist and functions such that for , one can write for some . The continuity of implies that , i.e. these points correspond to both simplices, and . Note that this also means that . The intuition of this construction is that the are located on the boundaries of adjacent simplices on the line segment. We find
(41) 
which yields the assertion.
Proof (Proof of Proposition 4)
Let be given by affinely independent vertices . We show that our lifting approach applied to the label space solves the convexified unlifted problem, where the dataterm was replaced by its convex hull on . Let the matrices and be defined through
(42) 
The transformation maps to . Now consider the following lifted function parametrized through :
(43) 
Consider a fixed . Plugging this lifted representation into the biconjugate of the lifted dataterm yields: