
Sublabel-Accurate Discretization of Nonconvex Free-Discontinuity Problems
In this work we show how sublabel-accurate multilabeling approaches can be derived by approximating a classical label-continuous convex relaxation of nonconvex free-discontinuity problems. This insight allows us to extend these sublabel-accurate approaches from total variation to general convex and nonconvex regularizations. Furthermore, it leads to a systematic approach to the discretization of continuous convex relaxations. We study the relationship to existing discretizations and to discrete-continuous MRFs. Finally, we apply the proposed approach to obtain a sublabel-accurate and convex solution to the vectorial Mumford-Shah functional and show in several experiments that it leads to more precise solutions using fewer labels.
11/21/2016 ∙ by Thomas Möllenhoff, et al.

Sublabel-Accurate Convex Relaxation of Vectorial Multilabel Energies
Convex relaxations of nonconvex multilabel problems have been demonstrated to produce superior (provably optimal or near-optimal) solutions to a variety of classical computer vision problems. Yet, they are of limited practical use as they require a fine discretization of the label space, entailing a huge demand in memory and runtime. In this work, we propose the first sublabel-accurate convex relaxation for vectorial multilabel problems. The key idea is to approximate the data term of the vectorial labeling problem in a piecewise convex (rather than piecewise linear) manner. As a result, we obtain a more faithful approximation of the original cost function that provides a meaningful interpretation for the fractional solutions of the relaxed convex problem. In numerous experiments on large-displacement optical flow estimation and on color image denoising we demonstrate that the computed solutions have superior quality while requiring much lower memory and runtime.
04/07/2016 ∙ by Emanuel Laude, et al.

Sublabel-Accurate Relaxation of Nonconvex Energies
We propose a novel spatially continuous framework for convex relaxations based on functional lifting. Our method can be interpreted as a sublabel-accurate solution to multilabel problems. We show that previously proposed functional lifting methods optimize an energy which is linear between two labels and hence require (often infinitely) many labels for a faithful approximation. In contrast, the proposed formulation is based on a piecewise convex approximation and therefore needs far fewer labels. In comparison to recent MRF-based approaches, our method is formulated in a spatially continuous setting and shows less grid bias. Moreover, in a local sense, our formulation is the tightest possible convex relaxation. It is easy to implement and allows an efficient primal-dual optimization on GPUs. We show the effectiveness of our approach on several computer vision problems.
12/04/2015 ∙ by Thomas Möllenhoff, et al.
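To illustrate the difference between a label-linear and a piecewise convex approximation, here is a hypothetical 1D sketch (not the paper's lifted formulation itself): between adjacent labels, a sublabel-accurate model can represent the data term by its convex envelope on each label interval, which is exact whenever the cost is convex there, while a label-linear model only interpolates the costs sampled at the labels. The quadratic data term and the 5-label grid below are illustrative choices.

```python
import numpy as np

def convex_envelope(x, y):
    """Lower convex envelope of sampled points (x sorted ascending),
    built as a monotone-chain lower hull, evaluated back at the samples."""
    hull = []  # indices of hull vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # drop k if it lies on or above the segment from j to i
            if (y[k] - y[j]) * (x[i] - x[j]) >= (y[i] - y[j]) * (x[k] - x[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], y[hull])

rho = lambda u: (u - 0.3) ** 2        # data term (convex between any two labels)
labels = np.linspace(0.0, 1.0, 5)     # a coarse 5-label discretization

gap_linear = gap_convex = 0.0
for a, b in zip(labels[:-1], labels[1:]):
    u = np.linspace(a, b, 201)
    exact = rho(u)
    linear = np.interp(u, [a, b], [rho(a), rho(b)])  # label-linear model
    convex = convex_envelope(u, exact)               # sublabel-accurate model
    gap_linear = max(gap_linear, np.max(np.abs(linear - exact)))
    gap_convex = max(gap_convex, np.max(np.abs(convex - exact)))

print(f"label-linear max error:     {gap_linear:.4f}")  # ≈ 0.0156
print(f"piecewise-convex max error: {gap_convex:.1e}")  # exact up to sampling
```

The label-linear model incurs a discretization error on every interval even for this convex cost, which is why such liftings need many labels, whereas the piecewise convex representation reproduces it exactly.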

The Primal-Dual Hybrid Gradient Method for Semiconvex Splittings
This paper deals with the analysis of a recent reformulation of the primal-dual hybrid gradient method [Zhu and Chan 2008; Pock, Cremers, Bischof and Chambolle 2009; Esser, Zhang and Chan 2010; Chambolle and Pock 2011], which makes it applicable to nonconvex regularizers, as first proposed for truncated quadratic penalization in [Strekalovskiy and Cremers 2014]. In particular, it investigates variational problems for which the energy to be minimized can be written as G(u) + F(Ku), where G is convex, F is semiconvex, and K is a linear operator. We study the method and prove convergence in the case where the nonconvexity of F is compensated by the strong convexity of G. The convergence proof yields an interesting requirement for the choice of the algorithm parameters, which we show to be not only sufficient but also necessary. Additionally, we show boundedness of the iterates under much weaker conditions. Finally, we demonstrate the effectiveness and convergence of the algorithm beyond the theoretical guarantees in several numerical experiments.
07/07/2014 ∙ by Thomas Möllenhoff, et al.
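As an illustration of the G(u) + F(Ku) splitting, here is a minimal sketch of the classical (convex) primal-dual hybrid gradient iteration applied to 1D total variation denoising, with G(u) = 0.5||u - f||^2, F = lam*||.||_1, and K the forward-difference operator; the semiconvex setting analyzed in the paper changes the admissible step sizes and is not reproduced here.

```python
import numpy as np

np.random.seed(0)
n = 64
# noisy step signal as a toy input f
f = np.concatenate([np.zeros(n // 2), np.ones(n // 2)]) + 0.1 * np.random.randn(n)

lam, tau, sigma = 0.5, 0.25, 0.5   # tau * sigma * ||K||^2 <= 1, since ||K||^2 <= 4

def K(u):                          # forward differences (the linear operator K)
    return np.diff(u)

def KT(p):                         # adjoint of K (negative divergence)
    return np.concatenate([[-p[0]], p[:-1] - p[1:], [p[-1]]])

u = f.copy()
u_bar = u.copy()
p = np.zeros(n - 1)
for _ in range(300):
    # dual step: prox of sigma * F^* is the projection onto [-lam, lam]
    p = np.clip(p + sigma * K(u_bar), -lam, lam)
    # primal step: prox of tau * G, G(u) = 0.5 * ||u - f||^2, in closed form
    u_new = (u - tau * KT(p) + tau * f) / (1 + tau)
    u_bar = 2 * u_new - u          # extrapolation step
    u = u_new

energy = lambda v: 0.5 * np.sum((v - f) ** 2) + lam * np.sum(np.abs(K(v)))
print(energy(f), energy(u))        # the energy drops substantially
```

With a semiconvex F, the projection in the dual step would be replaced by the prox of the nonconvex conjugate, and the step sizes must satisfy the stronger condition derived in the paper.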

Combinatorial Preconditioners for Proximal Algorithms on Graphs
We present a novel preconditioning technique for proximal optimization methods that relies on graph algorithms to construct effective preconditioners. Such combinatorial preconditioners arise from partitioning the graph into forests. We prove that certain decompositions lead to a theoretically optimal condition number. We also show how ideal decompositions can be realized using matroid partitioning and propose efficient greedy variants thereof for large-scale problems. Coupled with specialized solvers for the resulting scaled proximal subproblems, the preconditioned algorithm achieves competitive performance in machine learning and vision applications.
01/16/2018 ∙ by Thomas Möllenhoff, et al.
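To make the forest idea concrete, here is a small hypothetical sketch of a greedy partition of a graph's edges into forests, shown on the 4-neighborhood grid graph of a 3x3 image; the paper's matroid-partitioning construction with its optimality guarantees is more involved than this heuristic.

```python
class DSU:
    """Union-find structure for cycle detection within one forest."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False       # edge (a, b) would close a cycle
        self.parent[ra] = rb
        return True

def greedy_forest_partition(n, edges):
    """Put each edge into the first forest where it does not close a cycle;
    open a new forest when no existing one accepts it."""
    forests = []               # list of (DSU, edge list) pairs
    for (a, b) in edges:
        for dsu, es in forests:
            if dsu.union(a, b):
                es.append((a, b))
                break
        else:
            dsu = DSU(n)
            dsu.union(a, b)
            forests.append((dsu, [(a, b)]))
    return [es for _, es in forests]

# 4-neighborhood grid graph of a 3x3 image: 9 nodes, 12 edges
n = 9
edges = [(i, i + 1) for i in range(n) if i % 3 != 2] + [(i, i + 3) for i in range(6)]
forests = greedy_forest_partition(n, edges)
print(len(forests), [len(es) for es in forests])  # 2 [8, 4]
```

Each returned edge set is acyclic by construction, so every block of the resulting preconditioner corresponds to a forest on which the proximal subproblems can be solved efficiently.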

Controlling Neural Networks via Energy Dissipation
The last decade has shown tremendous success in solving various computer vision problems with the help of deep learning techniques. Lately, many works have demonstrated that learning-based approaches with suitable network architectures even exhibit superior performance for the solution of (ill-posed) image reconstruction problems such as deblurring, super-resolution, or medical image reconstruction. The drawback of purely learning-based methods, however, is that they cannot provide provable guarantees that the trained network follows a given data formation process during inference. In this work we propose energy dissipating networks that iteratively compute a descent direction with respect to a given cost function, or energy, at the currently estimated reconstruction. An adaptive step size rule such as a line search, along with a suitable number of iterations, can then guarantee that the reconstruction follows a given data formation model encoded in the energy to arbitrary precision, and hence control the model's behavior even at test time. We prove that, under standard assumptions, descent using the direction predicted by the network converges (linearly) to the global minimum of the energy. We illustrate the effectiveness of the proposed approach in experiments on single-image super-resolution and computed tomography (CT) reconstruction, and further illustrate extensions to convex feasibility problems.
04/05/2019 ∙ by Michael Moeller, et al.
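The descent-with-line-search mechanism can be sketched on a toy least-squares energy; the network's predicted direction is replaced here by the negative gradient purely for illustration, and the random A, y stand in for an arbitrary data formation model.

```python
import numpy as np

np.random.seed(1)
A = np.random.randn(20, 10)   # hypothetical linear data-formation model
y = np.random.randn(20)       # observed measurements

def energy(x):                # the cost the reconstruction must dissipate
    return 0.5 * np.sum((A @ x - y) ** 2)

def grad(x):
    return A.T @ (A @ x - y)

def descent_direction(x):
    # stand-in for the network's predicted direction (here simply -gradient)
    return -grad(x)

x = np.zeros(10)
energies = [energy(x)]
for _ in range(200):
    d = descent_direction(x)
    g = grad(x)
    t, E = 1.0, energies[-1]
    # backtracking (Armijo) line search: enforces monotone energy decrease
    while energy(x + t * d) > E + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d
    energies.append(energy(x))

print(energies[0], energies[-1])  # energy decreases monotonically
```

Any direction that makes an acute angle with the negative gradient would be accepted by the same line search, which is exactly the property the energy dissipating network is trained to provide.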

Lifting Vectorial Variational Problems: A Natural Formulation Based on Geometric Measure Theory and Discrete Exterior Calculus
Numerous tasks in imaging and vision can be formulated as variational problems over vector-valued maps. We approach the relaxation and convexification of such vectorial variational problems via a lifting to the space of currents. To that end, we recall that functionals with polyconvex Lagrangians can be reparametrized as convex one-homogeneous functionals on the graph of the function. This leads to an equivalent shape optimization problem over oriented surfaces in the product space of domain and codomain. A convex formulation is then obtained by relaxing the search space from oriented surfaces to more general currents. We propose a discretization of the resulting infinite-dimensional optimization problem using Whitney forms, which also generalizes recent "sublabel-accurate" multilabeling approaches.
05/02/2019 ∙ by Thomas Möllenhoff, et al.

Flat Metric Minimization with Applications in Generative Modeling
We take the novel perspective of viewing data not as a probability distribution but rather as a current. Primarily studied in the field of geometric measure theory, k-currents are continuous linear functionals acting on compactly supported smooth differential forms and can be understood as a generalized notion of oriented k-dimensional manifold. By moving from distributions (which are 0-currents) to k-currents, we can explicitly orient the data by attaching a k-dimensional tangent plane to each sample point. Based on the flat metric, which is a fundamental distance between currents, we derive FlatGAN, a formulation in the spirit of generative adversarial networks but generalized to k-currents. As our theoretical contribution, we prove that the flat metric between a parametrized current and a reference current is Lipschitz continuous in the parameters. In experiments, we show that the proposed shift to k>0 leads to interpretable and disentangled latent representations which behave equivariantly to the specified oriented tangent planes.
05/12/2019 ∙ by Thomas Möllenhoff, et al.
Thomas Möllenhoff