# A level set representation method for N-dimensional convex shape and applications

In this work, we present a new efficient method for convex shape representation based on level-set approaches, which is independent of the dimension of the concerned objects. The convexity prior is very useful for object completion in computer vision, but designing an efficient representation for high-dimensional convex objects is very challenging. In this paper, we prove that the convexity of the considered object is equivalent to the convexity of the associated signed distance function. The second order condition for convex functions is then used to characterize shape convexity equivalently. We apply this new method to two applications: object segmentation with convexity prior and the convex hull problem (especially with outliers). For both applications, the involved problems can be written as a general optimization problem with three constraints. An efficient algorithm based on the alternating direction method of multipliers is presented for the optimization problem. Numerical experiments are conducted to verify the effectiveness and efficiency of the proposed representation method and algorithm.


## 1 Introduction

In computer vision tasks, especially image segmentation, shape priors provide very useful information for improving results when the objects of interest are partially occluded or suffer from strong noise, intensity bias and artifacts. Therefore, various shape priors have been investigated in the literature [chan2001active, gorelick2014convexity, leventon2002statistical, Yuan2013]. In [chen2002using] and [leventon2002statistical], the authors combined shape priors with active contour models [caselles1997geodesic] using a statistical approach and a variational approach, respectively. Later, based on the Chan-Vese model [chan2001active], a new variational model, which uses a labelling function to handle the shape prior, was proposed in [cremers2003towards]. A modification of this method was presented in [chan2005level] to handle scaling and rotation of the prior shape. The priors used in these papers are typically learned or obtained from specific given image sets.

Recently, generic and abstract shape priors have attracted more and more attention, such as connectivity [vicente2008graph], star shape [veksler2008star, yuan2012efficient], hedgehog [isack2016hedgehog] and convexity [Gorelick2017Convexity, Yuan2013]. Among them, the convexity prior is one of the most important. First, many objects in natural and biomedical images are convex, such as balls, buildings and some organs [royer2016convexity]. Second, convexity plays a very important role in many computer vision tasks, like human vision completion [Liu1999The]. Several methods for convexity prior representation and its applications have been discussed in the literature [gorelick2014convexity, Yuan2013, yangA2017]. However, these methods often work only for 2-dimensional convex objects and may have relatively high computational costs. In this paper, we present a new method for convex shape representation that is not only suitable for convex objects in any dimension but also numerically efficient.

Most existing methods for convex shape representation fall into two groups: discrete approaches and continuous approaches. For the first class, there are several methods in the literature. In [strekalovskiy2011generalized], the authors first introduced a generalized ordering constraint for convex object representation; to achieve convexity, one needs to explicitly model the object boundaries. Later, an image segmentation model with a convexity prior was presented in [gorelick2014convexity]. This method is based on the definition of convexity, and the key idea is to penalize all 1-0-1 configurations along all straight lines, where 1 (resp. 0) indicates that the associated pixel is inside (resp. outside) the considered object. This method was then generalized to the segmentation of multiple convex objects in [gorelick2017multi]. In [royer2016convexity], the authors proposed a segmentation model that can handle multiple convex objects. They formulated the problem as a minimum cost multi-cut problem with a novel convexity constraint: if a path lies inside the concerned object(s), then the line segment between its two endpoints should not pass through the object boundary.

The continuous methods usually characterize shape convexity by exploiting the non-negativity of the boundary curvature in 2-dimensional space. As far as we know, curvature non-negativity was first incorporated into image segmentation in [Yuan2013]. Then, a similar method was adopted for cardiac left ventricle segmentation in [yangA2017]. In [bae2017augmented], an Euler's elastica energy-based model was studied for convex contours by penalizing the integral of absolute curvatures. Recently, the continuous (curvature-based) methods were developed further in [luo2018convex, yan2018convexity]. For a given object in 2-dimensional space, it was proved in [yan2018convexity] that non-negativity of the Laplacian of the associated signed distance function (SDF) [sussman1994level] is sufficient to guarantee the convexity of the shape. In [luo2018convex], the authors proved that this condition is also necessary. In addition, instead of solving negative-curvature penalization problems as in [bae2017augmented, Yuan2013, yangA2017], these two papers incorporated the non-negative Laplacian condition as a constraint in the involved optimization problem. This method has also been extended to the segmentation of multiple convex objects in [luo2019convex]. A projection algorithm and an ADMM algorithm were presented in [yan2018convexity] and [luo2018convex], respectively.

In some real applications, one needs to preserve convexity for high dimensional objects, such as tumors and organs in 3D medical images. It is therefore important to study efficient representation methods for high dimensional convex objects. However, the existing methods mentioned above face theoretical and computational difficulties when generalized to higher dimensions.

For the discrete methods in [Gorelick2017Convexity, royer2016convexity], it is almost impossible to extend to higher dimensional (d ≥ 3) cases directly because of the computational complexity: one can set up the same model as in the two dimensional case, but the computational cost increases dramatically. For the continuous methods using level-set functions [luo2018convex, Yuan2013], the mean curvature (Laplacian of the SDF) of the zero level-set surface cannot guarantee the convexity of the object in high dimensions. For example, in 3-dimensional space, an object is convex if and only if its two principal curvatures always have the same sign on the boundary. Accordingly, one should use the non-negativity of the Gaussian curvature to characterize the convexity of objects [elsey2009analogue]. However, problems (e.g., optimization problems for image segmentation) with a non-negative Gaussian curvature constraint or penalty are very complex and difficult to solve.

In this work, we present a new convexity prior which works in any dimension. Similar to the framework in [luo2018convex], we adopt the level-set representation, which is a powerful tool for the numerical analysis of surfaces and shapes [osher2002level, peng1999pde, zhao1996variational] and has many applications in image processing [chan1999active, vese2002multiphase]. We first prove the equivalence between the convexity of an object and the convexity of its associated SDF. It is well known that a twice differentiable convex function must satisfy the second order condition, i.e., have a positive semi-definite Hessian matrix at every point. Based on this observation, we obtain a new way to characterize shape convexity using the associated SDF. The proposed method has several advantages. First, it works regardless of the dimension of the objects. Second, it can easily be extended to the representation of multiple convex objects using the idea in [luo2019convex]. Third, the representation is very simple and allows us to design efficient algorithms for potential applications.

To verify the effectiveness of the proposed method, we apply it to two important applications in computer vision and design an efficient algorithm to solve them. The first is the image segmentation task with convexity prior [Gorelick2017Convexity, luo2018convex, Yuan2013]. More specifically, we combine the convexity representation with a 2-phase probability-based segmentation model. The second is the variational convex hull problem, which was first introduced in [li2019convex] for 2-dimensional binary images. This model can compute the convex hulls of multiple separate objects simultaneously and is very robust to noise and outliers compared with traditional methods. However, since it uses the same convexity prior as [luo2018convex], it works only for 2-dimensional problems. Using the proposed method, we can generalize it to higher dimensions while maintaining all of its advantages, e.g., robustness to outliers.

Both applications can be formulated as a general constrained optimization problem. Similar to [li2019convex, luo2018convex], an alternating direction method of multipliers (ADMM) algorithm for the general optimization problem is derived to solve the models. In the proposed algorithm, the solutions of all sub-problems can be derived explicitly or computed efficiently. Numerical results for the two concerned problems are presented to show the effectiveness and efficiency of our methods in 2- and 3-dimensional cases.

The rest of this paper is organized as follows. In Section 2, we introduce some basic notations and results from convex optimization. We then present the convexity prior representation method in Section 3. Section 4 is devoted to the variational models for the two applications. The numerical algorithm is given in Section 5. In Section 6, we present experimental results in 2- and 3-dimensional spaces to test our proposed methods. Conclusions and future work are discussed in Section 7.

## 2 Preliminaries

First, we briefly introduce some results from convex optimization in this section. Given a set of points x_1, …, x_k in ℝ^d, a point of the form ∑_{i=1}^k θ_i x_i is called a convex combination of x_1, …, x_k if θ_i ≥ 0 for i = 1, …, k and ∑_{i=1}^k θ_i = 1. A set C in ℝ^d is said to be convex if all convex combinations of points in C also belong to C. The convex hull (or convex envelope) of C is defined as the collection of all its convex combinations, i.e.,

 Conv(C) = { ∑_{i=1}^k θ_i x_i | x_i ∈ C, θ_i ≥ 0 for i = 1, …, k, and ∑_{i=1}^k θ_i = 1 }, (1)

where k can be any finite positive integer. In other words, Conv(C) is the smallest convex set containing C.

Another important concept is the convex function. A function f : ℝ^d → ℝ is convex if its epigraph is a convex set. If f is twice differentiable, a necessary and sufficient condition for convexity is that the Hessian matrix of f is positive semi-definite at every point, i.e.,

 H_f(x) ≥ 0, (2)

where H_f(x) denotes the Hessian matrix of f at x.

For any measurable f and real number α, we define the level set, sublevel set and suplevel set of f as follows:

 L_α(f) = {x | x ∈ dom(f), f(x) = α}, (3)
 L⁻_α(f) = {x | x ∈ dom(f), f(x) < α}, (4)
 L⁺_α(f) = {x | x ∈ dom(f), f(x) > α}, (5)

where dom(f) denotes the domain of f. It is easy to verify that if f is convex, then L⁻_α(f) is also convex for any α.

Next, we introduce a powerful tool, the level-set function, for the implicit representation of shapes. Given an open set Ω in ℝ^d with piecewise smooth boundary Γ, a level-set function ϕ of Ω satisfies:

 ϕ(x) > 0, x ∈ Ω₀,
 ϕ(x) = 0, x ∈ Γ, (6)
 ϕ(x) < 0, x ∈ Ω₁,

where Ω₁ = Ω and Ω₀ is the exterior of Ω. We further assume that Γ is nonempty and has measure zero in the rest of this paper. The evolution of the hypersurface Γ can then be represented implicitly by the evolution of ϕ. The main advantage of the level-set method is that it can track complicated topological changes and represent sharp corners very easily. Using the Heaviside function h, we obtain the characteristic function of Ω₀ as h(ϕ) and the characteristic function of Ω₁ as 1 − h(ϕ). The distributional derivative of the Heaviside function is denoted by δ.

In this work, we use the signed distance function, a special type of level-set function, to represent convex shapes. For an open set Ω with piecewise smooth boundary Γ, the SDF of Ω is defined as

 ϕ(x) = dist(x, Γ), x ∈ Ω₀,
 ϕ(x) = 0, x ∈ Γ, (7)
 ϕ(x) = −dist(x, Γ), x ∈ Ω₁,

where dist(x, Γ) = inf_{z∈Γ} ‖x − z‖₂. Since Γ is a compact set, the infimum is attained in Γ. It is well known that the SDF is continuous and satisfies the Eikonal equation [osher2002level]:

 |∇ϕ| = 1, (8)

where the gradient is defined in the weak sense. Notice that the weak solution of (8) is not unique, but one can define a unique solution in the viscosity sense [crandall1992user] given certain boundary conditions. We also obtain an interesting property of the SDF:

###### Lemma 1

Suppose we are given two compact subsets C₁ and C₂ in ℝ^d. Denote their boundaries by Γ₁ and Γ₂, and their SDFs by ϕ₁ and ϕ₂, respectively. Then C₁ ⊆ C₂ if and only if ϕ₁(x) ≥ ϕ₂(x) for any x ∈ ℝ^d.

The proof of this result will be given in the next section. This result is very useful for computing the convex hull via the level-set representation.

In [luo2018convex], the authors presented an equivalent condition for the convexity of 2-dimensional objects:

 Δϕ≥0, (9)

where ϕ is the associated SDF. It can be shown that Δϕ(x) equals the mean curvature of the level-set curve {y | ϕ(y) = ϕ(x)} at the point x. In 2-dimensional space, if the mean curvatures of the zero level-set curves are non-negative, the object must be convex. However, this is not the case in higher dimensions.
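As a quick numerical sanity check of (8) and the 2-dimensional condition (9), one can evaluate the SDF of a disk on a grid; the radius, grid size and band width below are illustrative choices:

```python
import numpy as np

# SDF of a disk of radius r (positive outside, negative inside, as in (7))
n, r = 201, 0.5
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - r
h = xs[1] - xs[0]

# central differences for the gradient and 5-point stencil for the Laplacian
gx, gy = np.gradient(phi, h)
grad_norm = np.sqrt(gx**2 + gy**2)
lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
       + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4*phi) / h**2

# restrict to a band around the zero level-set, away from the
# non-differentiable center and the grid border
band = np.abs(phi) < 0.2
print(np.abs(grad_norm[band] - 1).max() < 1e-2)   # Eikonal equation (8) holds
print(lap[band].min() > 0)                        # non-negative Laplacian (9)
```

Since the disk is convex, the Laplacian of its SDF is non-negative in the band, consistent with the 2-dimensional criterion of [luo2018convex].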

## 3 Convexity representation for high dimensional shapes

In [luo2018convex], the authors proved that a 2-dimensional shape is convex if and only if every sublevel-set of its SDF is convex. In fact, one can show that the convexity of a shape is equivalent to the convexity of its SDF, and this is true in any dimension. We summarize this result in the following theorem.

###### Theorem 1

Let Γ be the boundary of a bounded open subset Ω ⊂ ℝ^d and ϕ be the corresponding SDF of Ω. Then, Ω is convex if and only if ϕ is convex.

It is well known that the SDF must satisfy the second order condition (2) wherever it is twice differentiable. Therefore, we can use condition (2) to represent the convexity of shapes. Before proving Theorem 1, we need to introduce some useful lemmas.

###### Lemma 2

Let Γ be the boundary of a bounded open convex subset Ω and ϕ be the corresponding SDF of Ω. For any x in Ω₀, there exists a unique Px ∈ Γ such that

 ‖x − Px‖₂ = inf_{z∈Ω̄₁} ‖x − z‖₂ = inf_{z∈Γ} ‖x − z‖₂ = dist(x, Γ) = ϕ(x). (10)

In other words, the projection of an exterior point onto Ω̄₁ must belong to Γ.

###### Proof

Since Ω̄₁ is complete and convex, for any x ∈ Ω₀ there exists an element Px ∈ Ω̄₁ such that ‖x − Px‖₂ = inf_{z∈Ω̄₁} ‖x − z‖₂. If Px ∈ Γ, then ‖x − Px‖₂ = inf_{z∈Γ} ‖x − z‖₂ and the proof is trivial. Suppose instead that Px ∈ Ω₁, which is open; then there exists an open ball centered at Px with radius r such that B(Px, r) ⊆ Ω₁. Let y = Px + (r/2)(x − Px)/‖x − Px‖₂. We can verify that y ∈ Ω̄₁ and ‖x − y‖₂ < ‖x − Px‖₂, which contradicts the projection theorem [ciarlet2013linear, Theorem 4.3-1]. Therefore, Px must belong to Γ and ϕ(x) = ‖x − Px‖₂. Moreover, by the projection theorem, this Px is unique.

Actually, one can generalize this result to non-convex Ω without difficulty. The next lemma can be derived directly from the definition of the SDF.

###### Lemma 3

Let Γ be the boundary of an open subset Ω and ϕ be the corresponding SDF of Ω. Then, for any element x and non-negative r, the inequality ϕ(x) ≤ −r holds if and only if B̄(x, r) ⊆ Ω̄₁, where B(x, r) = {y | ‖y − x‖₂ < r}.

A simple corollary of Lemma 3 is that B̄(x, |ϕ(x)|) ⊆ Ω̄₁ for any x ∈ Ω̄₁. Now, we can give the proof of Theorem 1 as follows:

###### Proof

We first assume Ω is convex. Let x₁ and x₂ be any two elements in ℝ^d, and let x₀ = θx₁ + (1−θ)x₂ for θ ∈ [0, 1]. We would like to show that ϕ(x₀) ≤ θϕ(x₁) + (1−θ)ϕ(x₂). We divide the proof into three parts.

(i) First, suppose x₁ and x₂ are in Ω₀, i.e., ϕ(x₁) > 0 and ϕ(x₂) > 0. By Lemma 2, there exist unique y₁ = Px₁ and y₂ = Px₂ in Γ such that ‖x₁ − y₁‖₂ = ϕ(x₁) and ‖x₂ − y₂‖₂ = ϕ(x₂). If x₀ ∈ Ω₀, let y₀ = θy₁ + (1−θ)y₂, which lies in Ω̄₁ by the convexity of Ω, and we have

 ϕ(x₀) ≤ ‖x₀ − y₀‖₂ = ‖θ(x₁ − y₁) + (1−θ)(x₂ − y₂)‖₂ (11)
  ≤ θ‖x₁ − y₁‖₂ + (1−θ)‖x₂ − y₂‖₂ (12)
  = θϕ(x₁) + (1−θ)ϕ(x₂). (13)

If x₀ ∈ Ω̄₁, we have ϕ(x₀) ≤ 0 < θϕ(x₁) + (1−θ)ϕ(x₂).

(ii) If x₁ and x₂ are in Ω̄₁, then x₀ is also in Ω̄₁ by the convexity of Ω. Let r = θ|ϕ(x₁)| + (1−θ)|ϕ(x₂)| (the case r = 0 is trivial). For any element y ∈ B̄(x₀, r), there exist y₁ ∈ B̄(x₁, |ϕ(x₁)|) and y₂ ∈ B̄(x₂, |ϕ(x₂)|) such that y = θy₁ + (1−θ)y₂. For example, writing y = x₀ + v with ‖v‖₂ ≤ r, one can take y₁ = x₁ + (|ϕ(x₁)|/r)v and y₂ = x₂ + (|ϕ(x₂)|/r)v. Since B̄(x₁, |ϕ(x₁)|) ⊆ Ω̄₁ and B̄(x₂, |ϕ(x₂)|) ⊆ Ω̄₁ by the corollary of Lemma 3, we have y ∈ Ω̄₁. Thus, B̄(x₀, r) ⊆ Ω̄₁ and, by Lemma 3, ϕ(x₀) ≤ −r = θϕ(x₁) + (1−θ)ϕ(x₂).

(iii) If x₁ ∈ Ω̄₁ and x₂ ∈ Ω₀, by the continuity of ϕ there exists an α ∈ (0, 1) such that x₃ = αx₁ + (1−α)x₂ ∈ Γ, i.e., ϕ(x₃) = 0. Denote y₀ = αx₁ + (1−α)Px₂. We can compute that

 ‖x₃ − y₀‖₂ = ‖(1−α)(x₂ − Px₂)‖₂ = (1−α)ϕ(x₂). (14)

Since x₁ and Px₂ are in Ω̄₁ and x₃ ∈ Γ, by (ii) we have

 −(1−α)ϕ(x₂) = −‖x₃ − y₀‖₂ ≤ ϕ(y₀) ≤ αϕ(x₁) + (1−α)ϕ(Px₂) = αϕ(x₁), (15)

where the first inequality holds because x₃ ∈ Γ implies |ϕ(y₀)| ≤ ‖y₀ − x₃‖₂. Consequently, 0 = ϕ(x₃) ≤ αϕ(x₁) + (1−α)ϕ(x₂). For any θ ∈ [α, 1], we can write x₀ = ((1−θ)/(1−α))x₃ + (1 − (1−θ)/(1−α))x₁. Since x₁ and x₃ are in Ω̄₁, we know by (ii) that

 ϕ(x₀) ≤ ((1−θ)/(1−α))ϕ(x₃) + (1 − (1−θ)/(1−α))ϕ(x₁) (16)
  ≤ ((1−θ)/(1−α))(αϕ(x₁) + (1−α)ϕ(x₂)) + (1 − (1−θ)/(1−α))ϕ(x₁) (17)
  = θϕ(x₁) + (1−θ)ϕ(x₂). (18)

If θ ∈ [0, α], we can write x₀ = (θ/α)x₃ + (1 − θ/α)x₂ and similarly derive that

 ϕ(x₀) ≤ (θ/α)ϕ(x₃) + (1 − θ/α)ϕ(x₂) ≤ θϕ(x₁) + (1−θ)ϕ(x₂). (19)

Based on (i), (ii) and (iii), we can conclude that ϕ(θx₁ + (1−θ)x₂) ≤ θϕ(x₁) + (1−θ)ϕ(x₂) for any x₁, x₂ ∈ ℝ^d and θ ∈ [0, 1], i.e., ϕ is convex.

Conversely, if ϕ is convex in ℝ^d, then by the definition of convex functions all sublevel-sets of ϕ are convex, and so is Ω̄₁ = {x | ϕ(x) ≤ 0}.

We have proved that for any convex shape Ω, the corresponding SDF ϕ must be a convex function. Consequently, ϕ must satisfy the second order condition (2) wherever it is twice differentiable. Note that an SDF is usually non-differentiable on a set of measure zero, so the second order condition holds almost everywhere in the continuous setting. In the numerical computation, since we only care about the convexity of Ω, we can save computational cost by requiring the condition to hold only in a neighbourhood of the object boundary, i.e.,

 H(ϕ) ≥ 0 in L⁻_ε(|ϕ|) = {x ∈ ℝ^d | |ϕ(x)| ≤ ε}, (20)

for some ε > 0.
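As a sketch of how condition (20) can be checked numerically, the code below evaluates a finite-difference Hessian of a disk SDF inside a narrow band; the radius, grid size and band width ε are arbitrary illustrative choices:

```python
import numpy as np

# SDF of a disk (positive outside, as in (7)); the disk is convex,
# so phi is a convex function and its Hessian should be PSD in the band (20).
n, r, eps = 101, 0.5, 0.1
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - r
h = xs[1] - xs[0]

# second-order central differences for the Hessian entries
pxx = (np.roll(phi, -1, 0) - 2*phi + np.roll(phi, 1, 0)) / h**2
pyy = (np.roll(phi, -1, 1) - 2*phi + np.roll(phi, 1, 1)) / h**2
pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
       - np.roll(np.roll(phi, 1, 0), -1, 1) + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4*h**2)

# smallest eigenvalue of the symmetric 2x2 Hessian [[pxx, pxy], [pxy, pyy]]
lam_min = 0.5*(pxx + pyy) - np.sqrt(0.25*(pxx - pyy)**2 + pxy**2)

band = np.abs(phi) <= eps             # the neighbourhood in (20)
print(lam_min[band].min() >= -1e-2)   # PSD up to discretization error
```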

Lastly, using Lemmas 2 and 3, we can prove Lemma 1 as follows:

###### Proof

Suppose C₁ ⊆ C₂. We would like to show that ϕ₁(x) ≥ ϕ₂(x) for any x ∈ ℝ^d. If x ∉ C₂, by Lemma 2, we have

 ϕ₁(x) = inf_{z∈C₁} ‖z − x‖₂ ≥ inf_{z∈C₂} ‖z − x‖₂ = ϕ₂(x). (21)

If x ∈ C₂ but x ∉ C₁, then ϕ₁(x) ≥ 0 ≥ ϕ₂(x). If x ∈ C₁, by Lemma 3, we have

 B̄(x, |ϕ₁(x)|) ⊆ C₁ ⊆ C₂ ⇒ ϕ₁(x) ≥ ϕ₂(x). (22)

Therefore, we can conclude that ϕ₁ ≥ ϕ₂ in ℝ^d.

Conversely, if ϕ₁ ≥ ϕ₂ in ℝ^d, then for any x ∈ C₁ we have ϕ₂(x) ≤ ϕ₁(x) ≤ 0, which implies that x ∈ C₂.

## 4 Two variational models with convexity constraint

In this section, we present in detail the models for the two applications involving the convexity prior, using the proposed convexity representation method.

### 4.1 Image segmentation with convexity prior

Given a digital image u defined on a discrete image domain Ω̂, the goal of image segmentation is to partition the domain into disjoint sub-regions based on some features. Many algorithms have been developed for this goal in the literature. Here we consider a 2-phase Potts model [potts1952some] for segmentation:

 argmin_{Ω̂₀,Ω̂₁} ∑_{j=0}^1 ∑_{x_i∈Ω̂_j} f_j(x_i) + ∑_{j=0}^1 G(∂Ω̂_j), (23)

where the f_j are the region force terms of the two classes and G is the regularization term on the class boundaries. In the 2-phase case, usually Ω̂₁ is the object of interest and Ω̂₀ is the background.

In this paper, we consider the segmentation problem with a convexity prior, i.e., the object is known to be convex. Using the level-set representation and the results of the last section, we can write the segmentation problem as

 argmin_ϕ ∑_{x_i∈Ω̂} (f₁(x_i) − f₀(x_i)) h(ϕ(x_i)) + g(x_i)|∇h(ϕ(x_i))|, (24)
 s.t. |∇ϕ(x_i)| = 1 in Ω̂, (25)
  H(ϕ(x_i)) ≥ 0 in L⁻_ε(|ϕ|), (26)

where h denotes the Heaviside function and g is an edge detector computed from a Gaussian-smoothed version of the image. The first, unit-gradient constraint keeps ϕ an SDF, and the second is the convexity constraint. Since we only care about the convexity of the zero level-set, we impose the latter constraint only in a neighbourhood of the zero level-set to save computational effort.

There are many ways to define the data terms f₀ and f₁. In this work, we use a probability-based region force term as in [luo2018convex], where the data terms are computed from some prior information. Suppose we know the labels of some pixels as prior knowledge; denote by I₁ the set of labeled points belonging to phase 1 and by I₀ the set of labeled points belonging to phase 0. Define the similarity between any two points in Ω̂ as

 S(x, y) = exp{−a‖x − y‖₂² − b‖u(x) − u(y)‖₂²}, (27)

where a and b are fixed positive weights. Then, the probability that a pixel x belongs to phase 1 can be computed as

 p₁(x) = ∑_{y_i∈I₁} S(x, y_i) / ∑_{y_i∈I₁∪I₀} S(x, y_i), (28)

and the probability of belonging to phase 0 is p₀(x) = 1 − p₁(x). The region force term is then defined as f_j(x) = −log p_j(x) for j = 0, 1. One can refer to [luo2018convex] for more details about the segmentation model. Therefore, model (24) can be rewritten as

 argmin_ϕ ∑_{x_i∈Ω̂} [−log(p₁(x_i)) + log(p₀(x_i))] h(ϕ(x_i)) + μ g(x_i)|∇h(ϕ(x_i))| (29)
 s.t. |∇ϕ(x_i)| = 1 in Ω̂, (30)
  H(ϕ(x_i)) ≥ 0 in L⁻_ε(|ϕ|), (31)
  ϕ(x_i) ≤ 0 for any x_i ∈ I₁, ϕ(x_i) ≥ 0 for any x_i ∈ I₀. (32)

The numerical discretization schemes for the differential operators ∇ and H will be introduced later in Section 5. Similar to [li2019convex], we assume that the input image is periodic, which implies that ϕ satisfies the periodic boundary condition on ∂Ω̂. Using the fact that ∇h(ϕ) = δ(ϕ)∇ϕ, the last term in the objective functional can be written as μ g δ(ϕ)|∇ϕ|, where δ is the Dirac delta function. In the numerical computation, we replace h and δ by their smooth approximations:

 h(y) ≈ h_α(y) = 1/2 + (1/π) arctan(y/α), (33)
 δ(y) ≈ h′_α(y) = δ_α(y) = α/(π(y² + α²)), (34)

where α is a small positive number. The segmentation model (29) can also be directly generalized to high-dimensional cases for object segmentation.
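The approximations (33) and (34) are a few lines of code; a quick numerical check (with an illustrative α) that δ_α is indeed the derivative of h_α:

```python
import numpy as np

alpha = 0.1   # illustrative smoothing parameter

def h_alpha(y):
    # smooth Heaviside approximation (33)
    return 0.5 + np.arctan(y / alpha) / np.pi

def delta_alpha(y):
    # smooth Dirac delta (34), the exact derivative of h_alpha
    return alpha / (np.pi * (y**2 + alpha**2))

y = np.linspace(-1, 1, 2001)
dh = np.gradient(h_alpha(y), y[1] - y[0])   # numerical derivative of h_alpha
print(np.abs(dh - delta_alpha(y)).max() < 1e-3)
print(h_alpha(0.0) == 0.5)
```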

### 4.2 Convex hull model

Suppose we are given a hyper-rectangular domain Ω and we want to find the convex hull of a subset X ⊆ Ω. By definition, the convex hull is the smallest convex set containing X. If we denote the set of all SDFs of convex subsets of Ω by K, then the SDF corresponding to the convex hull minimizes the zero sublevel-set area (or volume) over K:

 min_{ϕ∈K} ∫_Ω (1 − h(ϕ(x))) dx. (35)

By Lemma 1, we can obtain an equivalent and simpler form:

 min_{ϕ∈K} ∫_Ω −ϕ(x) dx, (36)

which leads to the following discrete problem:

 min_ϕ ∑_{x_i∈Ω̂} −ϕ(x_i) (37)
 s.t. |∇ϕ(x_i)| = 1 in Ω̂, (38)
  H(ϕ(x_i)) ≥ 0 in L⁻_ε(|ϕ|), (39)
  ϕ(x_i) ≤ 0 for any x_i ∈ X. (40)

This model is a simplified version of the convex hull models in [li2019convex] and can be applied in any dimension. The first two constraints are the same as in the segmentation model (29). The last constraint requires the zero sublevel-set of ϕ to contain X. Again, we assume ϕ satisfies the periodic boundary condition.

As mentioned before, when the input data contain outliers, it is not appropriate to require the zero sublevel-set of ϕ to enclose all the given data in X. Instead, we can use a penalty function R to penalize large ϕ(x_i) for x_i ∈ X. By selecting appropriate parameters, we can find an approximate convex hull of the original set with high accuracy. The approximation model is given as:

 min_ϕ ∑_{x_i∈Ω̂} −ϕ(x_i) + λ m(x_i) R(ϕ(x_i)) (41)
 s.t. |∇ϕ(x_i)| = 1 in Ω̂, (42)
  H(ϕ(x_i)) ≥ 0 in L⁻_ε(|ϕ|), (43)

where m equals 1 on X and 0 elsewhere. Here we can choose R to be the positive part function

 R(s) = s for s > 0, and R(s) = 0 for s ≤ 0, (44)

or a smooth approximation of it.

## 5 Numerical algorithm

Here we propose an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the segmentation model (29) and the convex hull models (37) and (41). We also introduce the discretization schemes for the differential operators.

### 5.1 ADMM algorithm for the segmentation model and convex hull models

First, we write the three models (29), (37) and (41) in a unified framework:

 min_ϕ F(ϕ), (45)
 s.t. |∇ϕ(x_i)| = 1 in Ω̂, (46)
  H(ϕ(x_i)) ≥ 0 in L⁻_ε(|ϕ|), (47)
  ϕ ∈ S, (48)

where F could be the objective functional in (29), (37) or (41), and S is defined as

 S = {ψ : Ω̂ → ℝ | ψ(x_i) ≤ 0 in I₁, ψ(x_i) ≥ 0 in I₀}. (49)

For the segmentation model (29), I₁ and I₀ are the sets of labelled points. For the exact convex hull model (37), I₁ is the set X and I₀ is empty. For the approximate convex hull model (41), both I₁ and I₀ may be empty. If we have any prior labels for the approximate convex hull model, we can also incorporate them into the model to make I₁ and I₀ non-empty.

To solve model (45), we introduce three auxiliary variables p, Q and z for ∇ϕ, H(ϕ) and ϕ, respectively. Then we can write the augmented Lagrangian functional as

 L(ϕ, p, Q, z, γ₁, γ₂, γ₃) = ∑_{x_i∈Ω̂} { −ϕ(x_i) + (ρ₁/2)|∇ϕ(x_i) − p(x_i)|² + (ρ₂/2)‖H(ϕ(x_i)) − Q(x_i)‖²_F (50)
  + (ρ₃/2)|ϕ(x_i) − z(x_i)|² + ⟨γ₁(x_i), ∇ϕ(x_i) − p(x_i)⟩ + ⟨γ₂(x_i), H(ϕ(x_i)) − Q(x_i)⟩_F
  + γ₃(x_i)(ϕ(x_i) − z(x_i)) }

with the following constraints: |p(x_i)| = 1 in Ω̂, Q(x_i) ≥ 0 in L⁻_ε(|ϕ|), and z ∈ S, where S is specified by the problem at hand. Applying the ADMM algorithm, we can split the original problem into several sub-problems.

1. ϕ sub-problem:

 ϕ^{t+1} = argmin_ϕ L(ϕ, p^t, Q^t, z^t, γ₁^t, γ₂^t, γ₃^t)
  = argmin_ϕ ∑_{x_i∈Ω̂} −ϕ + (ρ₁/2)|∇ϕ − p^t|² + (ρ₂/2)‖H(ϕ) − Q^t‖²_F + (ρ₃/2)|ϕ − z^t|²
   + ⟨γ₁^t, ∇ϕ − p^t⟩ + ⟨γ₂^t, H(ϕ) − Q^t⟩_F + γ₃^t(ϕ − z^t).

Then the optimal ϕ must satisfy

 ∇*(ρ₁(∇ϕ − p^t) + γ₁^t) + H*(ρ₂(H(ϕ) − Q^t) + γ₂^t) + (ρ₃(ϕ − z^t) + γ₃^t) = 1,

where ∇* is the adjoint operator of ∇ and H* is the adjoint operator of H. This is equivalent to

 −ρ₁Δϕ + ρ₂H*(H(ϕ)) + ρ₃ϕ = rhs₁^t, (51)

where rhs₁^t = 1 + ∇*(ρ₁p^t − γ₁^t) + H*(ρ₂Q^t − γ₂^t) + ρ₃z^t − γ₃^t is a known vector at iteration t. Notice that H*(H(ϕ)) = Δ²ϕ, and then the fourth-order PDE can be rewritten as

 ρ₂Δ²ϕ − ρ₁Δϕ + ρ₃ϕ = rhs₁^t. (52)

Since ϕ satisfies the periodic boundary condition, we can apply the FFT as in [li2019convex] to solve it.
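A sketch of this FFT-based solve on a 2-dimensional periodic grid (unit spacing; the ρ values and grid size are hypothetical): the discrete periodic Laplacian is diagonalized by the DFT, so (52) reduces to a pointwise division in Fourier space.

```python
import numpy as np

def solve_phi(rhs, rho1, rho2, rho3):
    """Solve rho2*Lap^2(phi) - rho1*Lap(phi) + rho3*phi = rhs, periodic BC."""
    n1, n2 = rhs.shape
    # eigenvalues of the 5-point periodic Laplacian (all <= 0)
    lam = (2*np.cos(2*np.pi*np.arange(n1)/n1)[:, None]
           + 2*np.cos(2*np.pi*np.arange(n2)/n2)[None, :] - 4)
    symbol = rho2*lam**2 - rho1*lam + rho3   # > 0 whenever rho3 > 0
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / symbol))

# round-trip check: build rhs from a known phi and recover it
rng = np.random.default_rng(0)
phi = rng.standard_normal((32, 32))
rho1, rho2, rho3 = 1.0, 0.5, 2.0
lap = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                 + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4*f)
rhs = rho2*lap(lap(phi)) - rho1*lap(phi) + rho3*phi
phi_rec = solve_phi(rhs, rho1, rho2, rho3)
print(np.abs(phi_rec - phi).max() < 1e-10)
```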

2. p sub-problem:

 p^{t+1} = argmin_{|p|=1} L(ϕ^{t+1}, p, Q^t, z^t, γ₁^t, γ₂^t, γ₃^t)
  = argmin_{|p|=1} ∑_{x_i∈Ω̂} (ρ₁/2)|∇ϕ^{t+1} − p|² + ⟨γ₁^t, ∇ϕ^{t+1} − p⟩
  = (γ₁^t/ρ₁ + ∇ϕ^{t+1}) / |γ₁^t/ρ₁ + ∇ϕ^{t+1}|. (53)
3. Q sub-problem:

 Q^{t+1} = argmin_{Q≥0 in L⁻_ε(|ϕ|)} L(ϕ^{t+1}, p^{t+1}, Q, z^t, γ₁^t, γ₂^t, γ₃^t)
  = argmin_{Q≥0 in L⁻_ε(|ϕ|)} ∑_{x_i∈Ω̂} (ρ₂/2)‖H(ϕ^{t+1}) − Q‖²_F + ⟨γ₂^t, H(ϕ^{t+1}) − Q⟩_F
  = { γ₂^t/ρ₂ + H(ϕ^{t+1}), for x ∉ L⁻_ε(|ϕ|),
   Π_PSD{γ₂^t/ρ₂ + H(ϕ^{t+1})}, for x ∈ L⁻_ε(|ϕ|). (54)

The computation of this projection is very simple and can be done in the same way as in [rosman2014fast]. Given a real symmetric matrix A, suppose its eigendecomposition is A = UΛU^T, where U is orthogonal and Λ = diag(λ₁, …, λ_n) contains the eigenvalues of A with λ₁ ≥ ⋯ ≥ λ_n. If we define

 Λ⁺ = diag(max{λ₁, 0}, …, max{λ_n, 0}), (55)

then the projection of A onto the set of all symmetric positive semi-definite (SPSD) matrices under the Frobenius norm is UΛ⁺U^T. The proof can be found in [higham1988matrix].
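This projection takes only a few lines once an eigendecomposition is available; the matrix below is an arbitrary indefinite example:

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone in the Frobenius norm."""
    w, V = np.linalg.eigh(A)               # A = V diag(w) V^T
    return (V * np.maximum(w, 0.0)) @ V.T  # clip negative eigenvalues, as in (55)

A = np.array([[1.0, 2.0],
              [2.0, -3.0]])               # symmetric but indefinite
P = proj_psd(A)
print(np.linalg.eigvalsh(P).min() >= -1e-12)   # P is PSD
print(np.allclose(proj_psd(P), P))             # PSD matrices are fixed points
```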

4. z sub-problem:

 z^{t+1} = argmin_{z∈S} L(ϕ^{t+1}, p^{t+1}, Q^{t+1}, z, γ₁^t, γ₂^t, γ₃^t) (56)
  = argmin_{z∈S} ∑_{x_i∈Ω̂} (ρ₃/2)|ϕ^{t+1} − z|² + γ₃^t(ϕ^{t+1} − z) (57)
  = { min{0, γ₃^t/ρ₃ + ϕ^{t+1}}, x ∈ I₁,
   max{0, γ₃^t/ρ₃ + ϕ^{t+1}}, x ∈ I₀,
   γ₃^t/ρ₃ + ϕ^{t+1}, x ∉ I₁ ∪ I₀. (58)
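The z update (58) is a pointwise clamping; a small sketch with hypothetical label masks:

```python
import numpy as np

def update_z(v, in_I1, in_I0):
    """z sub-problem (58): clamp v = gamma3/rho3 + phi according to labels."""
    z = v.copy()
    z[in_I1] = np.minimum(0.0, v[in_I1])   # labeled object points: z <= 0
    z[in_I0] = np.maximum(0.0, v[in_I0])   # labeled background points: z >= 0
    return z                               # unlabeled points are unchanged

v = np.array([0.7, -0.4, 0.3, -0.2])
in_I1 = np.array([True, False, False, False])   # hypothetical masks
in_I0 = np.array([False, True, False, False])
z = update_z(v, in_I1, in_I0)
print(z[0] == 0.0 and z[1] == 0.0)   # both labeled entries clamped to 0 here
print(z[2] == 0.3 and z[3] == -0.2)  # unlabeled entries unchanged
```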

We summarize this procedure in Algorithm 1.

### 5.2 Numerical discretization scheme

Suppose we are given volumetric data v, a binary function defined on the discrete domain Ω̂; v can also be viewed as the characteristic function of a subset X ⊆ Ω̂. Each mesh point in Ω̂ can be represented by a d-tuple x = (x₁, …, x_d), where x_i is an integer in [1, m_i] for i = 1, …, d. To compute the convex hull of X using the algorithm described before, we first need to approximate the differential operators numerically. For any function ϕ on Ω̂, we denote the forward and backward difference operators ∂⁺_i and ∂⁻_i as

 ∂⁺_i ϕ(x) = ϕ(x₁, …, x_i + 1, …, x_d) − ϕ(x₁, …, x_i, …, x_d), if x_i < m_i,
 ∂⁺_i ϕ(x) = ϕ(x₁, …, 1, …, x_d) − ϕ(x₁, …, x_i, …, x_d), if x_i = m_i,

where the second case follows from the periodic boundary condition, with ∂⁻_i defined analogously.