1 Introduction
Distinct intensity appearances and smooth contrast-aligned boundaries are standard segmentation cues. However, in most real applications of image segmentation there are multiple objects of interest with similar or overlapping color appearances. Intensity edges can also be cluttered or weak. These common practical problems require additional regularization, as illustrated in the second row of Figure 1.
Figure 1: two example images with seeds (medical and photo); multi-object segmentation using the Potts model; multi-object segmentation adding our hedgehog shape prior.
There are multiple methodologies for enforcing shape regularity or shape priors. For example, Statistical Shape Models (SSM) [26, 6, 1] and Deformable Shape Models (DSM) [17] differ in their shape space representation and their distance measures between a given segmentation and the learned shape space. SSM applies principal component analysis to a training dataset, fitting a shape space distribution represented by a mean shape and the modes of greatest variance. Any given segmentation can be penalized based on how well it aligns with this shape space [26], or it can be restricted to the learned shape space [6]. M-rep [17] is a coarse-to-fine discrete DSM approach. In contrast to the basic user scribbles in our method, M-reps require detailed user inputs defining a figural shape model for each segment. They also need training data to estimate their model parameters. SSM and DSM assume a fixed shape topology, which is often violated by specific problem instances,
e.g. lesions, tumors, or horseshoe kidneys [23]. Our paper proposes a simple and sufficiently general shape regularization constraint that can be easily integrated into standard MRF methods for segmentation. Shape priors have been successfully used in binary graph cut segmentation [14, 25, 10]. While our “hedgehog” shape prior is a generalization of the popular star-convexity constraint [25] with several merits over previous extensions [14, 10], our main contribution is a multi-hedgehog prior in the context of multi-object segmentation problems.
We observe that similarity between object appearances and edge clutter are particularly problematic in larger multi-label segmentation problems, e.g. in medical imaging. Our multi-hedgehog prior is fairly flexible, has efficient optimizers, and shows significant potential in resolving very common ambiguities in multi-label segmentation problems, see Fig. 1 (last row). Our general multi-object segmentation framework allows enforcing a “hedgehog” shape prior for any of the objects. The class of all possible hedgehog priors is sufficiently representative, yet each specific hedgehog constraint offers sufficient regularization to address color overlap and weak/cluttered edges. One extreme case of our prior is closely related to the standard star shape prior [25]. The other extreme case allows shapes with restricted skeletons [18, 24].
The main contribution of our work is a practical and efficient way to combine distinct shape priors for segments in the popular multi-label MRF framework [5]. Our work also allows us to extend previous multi-surface graph cut methods [16, 7]. For example, [16] computes multiple nested segments using one fixed polar grid defined by some non-overlapping rays. Besides a particular image discretization, these rays introduce two constraints: one star-like shape constraint shared by the nested segments, and a smoothness constraint penalizing segment boundary jumps between adjacent rays. In contrast, our method defines independent shape constraints for each segment. Similarly to [14], shape normals are constrained by arbitrary vector fields, rather than non-overlapping rays [16] or trees [25, 10]. Our use of a Cartesian grid allows us to enforce standard boundary length smoothness [3]. While this paper is focused on a Potts model with distinct shape constraints, hedgehog shapes can be easily combined with inter-segment inclusion or exclusion constraints [7]. The use of distinct (not necessarily nested) shape priors extends the range of applications in [16].
Our Potts framework optimization algorithm is closely comparable with a special non-submodular case of [7] with exclusion constraints. While we use independent shape constraints for each segment, these are easy to integrate into each layer in [7]. More importantly, instead of a binary multi-layer formulation with non-submodular potentials, we use a multi-label formulation amenable to expansion moves. Besides memory savings, our approach solves the non-submodular segmentation problem with a guaranteed approximation quality bound. Section 4 discusses relations to [16, 7] in more detail.
Overview of contributions: We propose a new multilabel segmentation model and the corresponding optimization algorithm. Our contributions are summarized below.

hedgehog shape constraint: a new flexible method for segment regularization based on simple and intuitive user interactions.

a new multi-object segmentation energy with multi-hedgehog shape priors.

an extension of expansion moves [5] for the proposed energy.

an experimental evaluation showing how our multi-object segmentation method solves problematic cases for the standard Potts model [5].
The rest of the paper is organized as follows. Section 2 defines our hedgehog shape prior for the simpler case of binary segmentation of one object. We discuss its properties and show how it can be globally optimized with graph cuts. Section 3 defines the multi-hedgehog shape constraint in the context of multi-label MRF segmentation and proposes an extension of the expansion optimization algorithm. Our experiments in Section 5 include multi-object segmentation of real photos and 3D multi-modal medical data.
2 Hedgehog shape constraint for one object
This section describes our hedgehog shape prior for a single object in the case of binary segmentation. Section 3 describes a more general multi-hedgehog segmentation prior where multiple objects can have separate hedgehog constraints. While the multi-hedgehog prior helps in a much wider range of problems, e.g. in medical imaging, binary segmentation with one “hedgehog” is easier to start from, and it has merits on its own. In particular, the single-hedgehog prior generalizes popular star-convexity [25] differently from other related methods [14, 10] in binary segmentation.
Figure 2: (a) scribble’s distance map; (b) hedgehog constraint.
Similarly to the star prior [25], the hedgehog prior can be defined interactively. Instead of a single click at the star center, a hedgehog shape allows an arbitrary scribble roughly corresponding to its skeleton. A hedgehog can also be defined by an approximate user-defined outline of a desired shape or by a shape template. In any case, such a scribble, outline, or template defines the corresponding (signed) distance transform or distance map d and a field of its gradients ∇d, as illustrated in Fig. 2. Our hedgehog constraint for segment S is defined by the vector field ∇d and an angular threshold θ restricting the orientations of surface normals n_p at any point p on the boundary ∂S to a cone
∠(n_p, ∇d_p) ≤ θ,   (1)
assuming gradient ∇d_p is defined at p. More generally, a hedgehog constraint for segment S could be defined by any given vector field v_p defining preferred directions for surface normals. Similarly to [14], we can use the dot product ⟨v_p, n_p⟩ to define allowed normal cones whose width varies depending on the magnitude of v_p. In the case v_p = ∇d_p this constraint reduces to (1), since |∇d_p| = 1 at all points where the gradient exists.
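As an illustration of how the distance map, its gradient field, and the cone constraint (1) can be computed in practice, here is a minimal Python sketch; the function names and the use of SciPy's Euclidean distance transform are our own illustrative choices, not part of the method's specification.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hedgehog_field(scribble_mask):
    """Distance map d of a user scribble and its unit gradient field.

    scribble_mask: boolean HxW array, True on scribble pixels.
    """
    # Distance from every pixel to the nearest scribble pixel.
    d = distance_transform_edt(~scribble_mask)
    gy, gx = np.gradient(d)
    mag = np.hypot(gy, gx)
    mag[mag == 0] = 1.0          # avoid division by zero on the scribble itself
    return d, np.stack([gy / mag, gx / mag], axis=-1)

def normal_allowed(grad, normal, theta):
    """Cone constraint (1): the angle between normal and grad is at most theta."""
    cos = np.dot(grad, normal) / (np.linalg.norm(grad) * np.linalg.norm(normal))
    return np.arccos(np.clip(cos, -1.0, 1.0)) <= theta
```

A single-click scribble reproduces the radial field of the star prior; an elongated scribble yields a field following its level-sets.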
2.1 Single hedgehog properties
Even the single hedgehog shape prior discussed in this section can be useful in practice. For example, in the case of a single click it closely approximates the popular star convexity [25]. However, our formulation uses locally defined constraints, which can be approximated by a simple rule for selecting local edges, see Section 2.2. Unlike [25], we do not enforce a global tree structure, see Fig. 3(b). Also, like [14, 10], the hedgehog prior allows a much larger variability of shapes for scribbles different from a point. In our case, a scribble defines a rough skeleton of a shape. For example, for smaller values of θ our cone constraints (1) give a tighter alignment of surface normals with vectors ∇d_p, forcing the segment boundary to closely follow the level-sets of the scribble’s distance map d, see Fig. 5. In the limit, this implies consistency of the segment’s skeleton with the skeleton of the given scribble, outline, or template.
2.2 Single hedgehog via graph cuts
We show an approximation for hedgehog constraint (1) for an object in the context of binary N-dimensional image segmentation via graph cuts [2]. All cone constraints (1) for any given θ and distance map gradients ∇d_p, see Fig. 3(a), correspond to a certain set of infinity cost directed edges, see Fig. 3(b-d). For example, consider the cone C_p of allowed surface normals at some point p illustrated in Fig. 4 for two different values of parameter θ. It is easy to see that a surface/boundary of segment S passing at p has normal n_p ∈ C_p iff this surface does not cross the corresponding polar cone
C°_p = { v : ⟨v, u⟩ ≤ 0  ∀u ∈ C_p }.   (2)
This reformulation of our hedgehog constraint (1) is easy to approximate via graph cuts by assigning infinity cost to all directed edges adjacent to p whose directions agree with polar cone C°_p, see Fig. 4. To avoid clutter, the figure only shows such directed edges starting at p, but one should also include similarly oriented directed edges pointing to p. The set of all directed graph edges consistent with local polar cone orientations, see Fig. 3(b-d), is
E_∞ = { (p, q) ∈ N : (q − p) ∈ C°_p }.   (3)
Obviously, hedgehog constraints are better approximated by larger neighborhood systems N; e.g. a 32-neighborhood works better than an 8-neighborhood, see Fig. 5(b,c).
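One way to sketch the construction of the infinity-cost edge set from a gradient field, a cone half-angle θ, and a chosen neighborhood system is the following hypothetical helper; it assumes a unit gradient field and integer pixel offsets, and tests membership in the polar cone as angle(q − p, ∇d_p) ≥ 90° + θ.

```python
import numpy as np
from itertools import product

def infinity_edges(grad, theta, nbhd):
    """Directed edges receiving infinite cost, approximating the edge set (3):
    edge (p, q) is included when the direction (q - p) lies in the polar
    cone at p, i.e. its angle to grad_p is at least 90 degrees + theta.

    grad:  HxWx2 array of unit vectors (gy, gx) per pixel.
    theta: cone half-angle in radians.
    nbhd:  integer pixel offsets (dy, dx), e.g. an 8- or 32-neighborhood.
    """
    H, W, _ = grad.shape
    edges = []
    for (y, x), (dy, dx) in product(np.ndindex(H, W), nbhd):
        qy, qx = y + dy, x + dx
        if not (0 <= qy < H and 0 <= qx < W):
            continue
        e = np.array([dy, dx], float)
        e /= np.linalg.norm(e)
        ang = np.arccos(np.clip(np.dot(grad[y, x], e), -1.0, 1.0))
        if ang >= np.pi / 2 + theta:
            edges.append(((y, x), (qy, qx)))
    return edges
```

Passing a larger offset set (32-neighborhood instead of 8) samples the polar cone more densely, which is exactly why larger neighborhoods approximate the constraint better.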
The chosen vector field has a direct effect on the set of allowed shapes when varying θ. Figure 6 shows the segmentation results for varying θ for two different vector fields on the same synthetic example.
Figure 4: (a) wide cone of normals; (b) tight cone of normals.
Figure 5: (a) image and user scribble (blue); (b) 8-neighborhood; (c) 32-neighborhood.
Figure 6: (a) image and user scribble (blue); (b) Euclidean distance transform gradient; (c) synthetic vector field.
3 Multi-hedgehog segmentation energy
Given a set of pixels Ω, a neighborhood system N, and a set of labels L, our multi-labeling segmentation energy is

E(f) = Σ_{p∈Ω} D_p(f_p) + λ Σ_{(p,q)∈N} V_{pq}(f_p, f_q) + H(f),   (5)

where f : Ω → L is a labeling.
The first two terms, namely the data and smoothness terms, are widely used in computer vision, e.g. [4, 2, 19]. The data term is commonly referred to as the regional term, as it measures how well pixels fit their corresponding labels. To be specific, D_p(i) is the penalty for assigning label i to pixel p. Similar to [19], a label’s probabilistic model, a Gaussian Mixture in our case, is found by fitting a probabilistic model to the seeds given by the user. The smoothness term is a standard pairwise regularizer that discourages segmentation discontinuities between neighboring pixels. A discontinuity occurs whenever two neighboring pixels are assigned different labels. In its simplest form, V_{pq}(f_p, f_q) = w_{pq} [f_p ≠ f_q], where [·] is the Iverson bracket and w_{pq} is a non-increasing function of the intensities at p and q. Also, λ is a parameter that weights the importance of the smoothness term.
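A minimal sketch of the two standard terms follows; the mixture-model interface (a `score_samples`-style log-likelihood, as in scikit-learn) and the Gaussian form of the contrast weight are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def data_term(pixels, gmms):
    """D_p(i) as the negative log-likelihood of pixel colors under label i's
    mixture model. `pixels` is N x 3; `gmms` is a list of fitted models,
    each exposing a score_samples(X) log-likelihood (sklearn-style)."""
    return np.stack([-g.score_samples(pixels) for g in gmms], axis=1)

def smoothness_weight(Ip, Iq, sigma=10.0):
    """Contrast-sensitive pairwise weight w_pq: a non-increasing function
    of the intensity difference between neighbors p and q."""
    return np.exp(-np.sum((Ip - Iq) ** 2) / (2 * sigma ** 2))
```

The weight approaches 1 across flat regions and decays toward 0 across strong edges, so cutting along intensity edges is cheap.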
The third term, our contribution, is the hedgehog term
H(f) = Σ_{i∈L} Σ_{(p,q)∈E^i_∞} w_{pq}(i) · [f_p = i][f_q ≠ i],   (6)

where w_{pq}(i) = ∞ and E^i_∞ is the edge set (3) induced by label i’s vector field. Those familiar with graph cuts may prefer to think of each such term as an ∞-cost arc from p to q, thus prohibiting any cut that satisfies f_p = i and f_q ≠ i.
The hedgehog term is the sum of the hedgehog constraints over all the labels, and it guarantees that any feasible labeling¹, i.e. one with E(f) < ∞, will result in a segmentation with surface normals respecting the orientation constraints (1). Notice that (6) reduces to [25] when θ = 90° and the shape constraints are defined for only one of the labels by a single pixel.

¹ We use feasible (and not bound) because there is at least one trivial solution with finite cost. In practice, it is sensible to assume that one of the labels, e.g. the background label, does not require enforcing shape constraints; otherwise the problem could become over-constrained. One trivial solution is to label all pixels as background except those labeled by user scribbles.
3.1 Expansion Moves
In this section we describe how to extend the binary expansion moves of α-expansion [5] to respect the shape constraints, and show that these moves are submodular. The main idea of the α-expansion algorithm is to maintain a current feasible labeling f, i.e. E(f) < ∞, and iteratively move to a better labeling until no improvement can be made. To be specific, at each iteration a label α is chosen, and the variables f_p for all p ∈ Ω are given a binary choice x_p ∈ {0, 1}: 0 to retain the old label, or 1 to switch to α, i.e. f_p = α.
The hedgehog term (6) for a binary α-expansion move could be written as

H_α(x) = Σ_{(p,q)∈E^α_∞} ∞ · x_p (1 − x_q) + Σ_{i∈L∖{α}} Σ_{(p,q)∈Ẽ^i_∞} ∞ · (1 − x_p) x_q,   (7)

where x is the vector of binary move variables and

Ẽ^i_∞ = { (p, q) ∈ E^i_∞ : f_p = f_q = i }.   (8)
The first term in (7) guarantees that the resulting labeling respects label α’s hedgehog constraints. In addition, the second term guarantees that the hedgehog constraints satisfied by the current labeling f for all labels in L∖{α} are not violated by the new labeling.
According to [15], any first-order binary function can be exactly optimized if all pairwise terms are submodular. A binary function E of two variables is submodular if E(0,0) + E(1,1) ≤ E(0,1) + E(1,0). Our energy (7) is submodular, as it can be written as a sum of submodular pairwise binary energies over all possible pairs of p and q. Notice that for any given pair, E(1,1) = 0 by construction, and E(0,0) = 0 as long as the current labeling is a feasible one, i.e. it does not cut any of the ∞-cost arcs. Also, E(0,1) and E(1,0) are both non-negative by construction. Therefore, the submodularity condition is satisfied for all pairs of p and q.
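The submodularity check itself is a single inequality; the sketch below encodes it and exercises it on a hedgehog-style pairwise table where only the E(1,0) entry carries an infinite arc cost.

```python
def is_submodular(E):
    """Kolmogorov-Zabih condition for a pairwise table E[a][b], a, b in {0,1}:
    E(0,0) + E(1,1) <= E(0,1) + E(1,0)."""
    return E[0][0] + E[1][1] <= E[0][1] + E[1][0]

INF = float('inf')
# Hedgehog-style arc: cutting it (x_p=1, x_q=0) costs infinity,
# while E(0,0) = E(1,1) = 0, so the inequality holds trivially.
hedgehog_pair = [[0, 0], [INF, 0]]
```

Any finite non-negative costs on the off-diagonal entries preserve the inequality, which is why the argument above goes through for every pair.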
Figure 7: (a) initial seeds; (b) current labeling; (c) constraints from the first term of (7); (d) constraints from the second term of (7); (e) feasible expansion move; (f) infeasible expansion move.
Fig. 7 shows an example of an expansion move over the green label. We assume shape constraints only for the green and purple labels. Fig. 7(a) shows the initial seeds for three different labels, while (b) shows the current feasible labeling. Fig. 7(c-d) show the shape constraints enforced by the green and purple labels while expanding the green label. Note that the green shape constraints are enforced all over the image, while the purple shape constraints are enforced only inside its current support area, as it is not necessary to enforce them everywhere. Fig. 7(e) shows a feasible move that respects the green and purple shape constraints, while (f) shows an infeasible move that respects only the green shape constraints.
4 Relation to multi-surface graph cuts
Our work can be related to multi-object segmentation methods [16, 7] combining various forms of boundary regularization and interactions between the surfaces. In particular, Logismos [16] computes nested segments using polar grid layers (one per segment) as in Fig. 8(b). In general, edges between the layers enforce inter-surface constraints like minimum and maximum distances between the surfaces along each ray. For these constraints to work, the polar grids should be the same at all layers. Edges within each polar grid enforce regularity of the corresponding segment. Figure 8(b) details the construction. Red edges penalize inter-ray surface jumps², and infinity-cost green edges enforce a shape prior analogous to star convexity [25].

² In a polar representation, let each segment correspond to a labeling assigning a distance from the pole to the segment boundary along each ray. Inter-ray smoothness corresponds to a convex pairwise potential as in Ishikawa [12]. Similarly, inter-layer edges enforcing min and max distances along each ray are a special case of a convex potential.
Figure 8: Cartesian discretization approach (left) vs. polar discretization approach (right). (a) Two identical hedgehogs with inclusion constraint [7]; (b) two identical nested stars as in Logismos [16]; (c) two distinct hedgehogs with inclusion constraint [7]; (d) extended multi-polar Logismos with distinct stars.
When considering only one segment, our hedgehog shape prior is closely related to both Logismos and star convexity. The use of a Cartesian grid makes our approach closer to the methods [14, 25] already discussed in Sec. 2. Our prior is defined by a vector field, see Fig. 3(a), instead of a polar system of non-overlapping rays [16] requiring considerable care during construction. Each vector at any of our grid pixels defines a cone of allowed surface normals, see Fig. 4, controlled by width parameter θ. In particular, tighter cones enforce skeleton consistency. While our dual cone of infinity-cost edges resembles a combination of green and red edges in each polar layer of Logismos, our geometrically motivated Cartesian approach uses simpler vector fields generalizing non-overlapping rays, and does not require highly non-uniform polar resampling of images. In fact, our graph construction is technically different from Logismos, as evident from our discretization details presented in the Appendix.
Also, there are more substantial differences between our multi-hedgehog method and Logismos. The latter enforces one star model for all nested shapes, since it uses the same polar grids. In contrast, we do not require nested segments and allow independent shape priors for each segment. Our current approach does not enforce any geometric inter-segment distances. Thus, it can be seen as an augmentation of the standard Potts model with independent shape priors for each segment. However, the following two subsections discuss certain extensions of our multi-hedgehog approach and Logismos that make them more comparable.
4.1 Hedgehogs with inter-segment constraints
If additional geometric inter-segment constraints are needed, our hedgehog shapes can be easily integrated with the isotropic Cartesian formulations for inclusion, minimum margin, exclusion [7], and Hausdorff distance [21]. For example, Fig. 8(a) illustrates a layered graph construction enforcing zero-margin inclusion for two segments [7] (brown edges) combined with the same hedgehog shape prior (black edges, as in Fig. 4) defined by identical vector fields (blue) at the two layers. It is also easy to switch to distinct shape priors for each segment by using different vector fields, see Fig. 8(c).
Interestingly, replacing inclusion by the non-submodular exclusion constraint between the layers [7] makes the corresponding model conceptually close to our Potts approach with multi-hedgehog priors. Thus, our multi-label optimization by expansion on a single-layer graph in Sec. 3.1 can be seen as an alternative to QPBO [20], TRW-S [22], or other standard approximate optimization methods [13] applicable to the binary non-submodular multi-layered graphical model in [7]. For significant memory savings and potential speed gains, it is possible to reformulate the geometric inter-segment constraints in [7] as multi-label segmentation potentials that can be addressed with efficient approximate algorithms on one image-grid layer, e.g. expansion, message passing, or other methods.
4.2 Multi-polar Logismos with distinct shape priors
It is interesting to consider the option of different polar grids at each layer of Logismos, as in Fig. 8(d), which makes it comparable to multi-object segmentation [7] with distinct hedgehog shape priors (c) discussed in the previous subsection. While such a multi-polar extension of Logismos can provide distinct shape priors for the segments, it raises questions about the inter-layer interactions and their geometric interpretation. First, there is a minor problem of misalignment between the polar grid nodes. However, a bigger problem is the misalignment between the rays, which calls for a revision of the nestedness and along-the-ray distance constraints between the surfaces. If no nestedness is needed, then it is necessary to add non-submodular consistency constraints between the layers, i.e. exclusion [7]. If nestedness is still required, then the simple inter-surface distance constraints of [16] are possible, but the minimum distance would be enforced along the rays of the smaller segment’s layer and the maximum distance along the rays of the larger segment’s. This discrepancy may be acceptable if the polar systems and the corresponding shape priors are close, but larger shape differences call for more isotropic definitions of inter-shape distances [7, 21] that are independent of polar discretization.
5 Experiments
Figure 9: (a) initial seeds; (b) Potts model; (c) Potts model with a larger smoothness weight; (d) hedgehogs + Potts.
Figure 10: (a) initial seeds; (b) Potts model; (c) multi-star + Potts model; (d) hedgehogs + Potts model.
In the following set of experiments we show the benefit of incorporating our hedgehog term (6) into the well-studied Potts model segmentation energy, i.e. data term + smoothness term, for multi-object segmentation in 2D and 3D. We also give an illustrative real-life example showing that the hedgehog shape is more general than the star shape [25]. The results shown in this section for our method were generated using a fixed θ when computing the hedgehog shape constraints, and we did not enforce any shape constraints on the background model. Also, the same smoothness weight is used when comparing methods, unless stated otherwise.
Our optimization framework is similar to [19], where the user marks a set of initial seeds in the form of a scribble for each required label, e.g. left kidney, right kidney, etc. The seeds for each label are used to fit an initial Gaussian Mixture color model and to generate its hedgehog shape constraints. Similarly to [11, 8], we iteratively optimize our energy (5) (or the Potts model) in an EM-style fashion. We alternate between finding a better segmentation and re-estimating the color models using the current segmentation. The framework terminates when it cannot decrease the energy anymore.
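The EM-style alternation can be sketched as follows, with `optimize`, `fit_models`, and `energy` as hypothetical placeholders for the expansion solver, GMM fitting, and energy (5), respectively.

```python
def segment(image, seeds, optimize, fit_models, energy):
    """EM-style alternation: fit color models, re-segment, and repeat
    until the energy stops decreasing."""
    models = fit_models(image, seeds)          # initial models from user seeds
    labeling = optimize(image, models)         # best labeling for fixed models
    best = energy(image, labeling, models)
    while True:
        new_models = fit_models(image, labeling)    # re-estimate color models
        new_labeling = optimize(image, new_models)  # re-segment
        e = energy(image, new_labeling, new_models)
        if e >= best:                          # cannot decrease the energy
            return labeling
        labeling, best = new_labeling, e
```

Because every accepted iteration strictly decreases a bounded-below energy, the loop terminates; the current labeling is kept whenever a re-estimation step fails to improve it.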
For the example shown in Fig. 9(a), (b-c) show Potts model results for two smoothness weights, the larger being 6. It should be noted that 6 is the smallest smoothness weight that did not result in oversegmentation when using Potts. However, the result in Fig. 9(c) is biased towards smaller objects (notice the star tips), because by increasing the smoothness weight we also increase the shrinking bias. Oversegmented results such as the one in Fig. 9(b) can be avoided without increasing the shrinking bias, simply by incorporating multi-shape priors. Our method, which incorporates hedgehog shape priors into the Potts model, was able to find a better segmentation, see Fig. 9(d).
The objective of the example shown in Fig. 10(a) is to segment the left and right lungs, and the background. The Potts model result shown in Fig. 10(b) has holes, i.e. part of the background appears in the middle of the lungs. Furthermore, the Potts model converged to biased color models where the right lung preferred brighter colors while the left preferred darker colors. Similar to the previous example, increasing the smoothness weight for the Potts model increases the shrinking bias, and it becomes hard to segment the elongated part of the right lung. Using multi-star, a generalization of [25] to multi-object segmentation, is not enough, because the right lung is not a star shape. To be specific, there is no point inside the right lung that could act as the center of a star shape that would include it. Fig. 10(d) shows the result for our method, where user scribbles were used to enforce shape constraints, compared to using a single pixel per label [9].
We applied our method to PET-CT scans of three different subjects to segment the liver, left kidney, right kidney, and background. Although we applied our method and the Potts model to the 3D volumes, we only show results on a few representative slices from each volume in Fig. 11. Also, the results of the different methods for each subject were computed using the same smoothness weight. We can see from the last two rows, which compare our method to Potts, that using hedgehog constraints enabled us to avoid geometrically incorrect segmentations, e.g. one liver inside the other (last row, middle), or parts of the left kidney appearing between the right kidney and the liver (last row, right). Furthermore, for test subjects 1 and 2 the kidneys and background were poorly segmented by the Potts model, e.g. most of the kidneys were segmented as background for test subject 1. Potts’ poor performance is due to the large overlap between the kidney and background color models. This overlap resulted in a data term too indiscriminative for Potts to properly separate them. This issue becomes worse in iterative frameworks where color models are re-estimated based on the current segmentation. To be specific, if at any iteration the Potts model results in a bad segmentation, then re-estimating the color models will bias them towards the bad segmentation, and subsequent iterations worsen the results. Comparing our results for subjects 1 and 2 to the Potts model shows that our method is less prone to the aforementioned issue, as we forbid undesirable segmentations, i.e. those that do not respect the shape constraints.
Figure 11: representative slices for Subjects 1, 2 and 3. For each subject, the top rows show our method (hedgehog shapes + Potts) and the bottom rows show the same slices segmented with the Potts model.
For quantitative comparison, Table 1 lists, for each organ of each subject, the F-score, precision, and recall measures of our method and the Potts model, where F-score = 2 · Precision · Recall / (Precision + Recall). For the kidneys, our method clearly outperformed the Potts model, e.g. note the Potts model’s poor precision/recall for subjects 1 and 2. For the liver, both methods performed comparably.
Subject 1  Subject 2  Subject 3  
Ours  Potts  Ours  Potts  Ours  Potts  
Right Kidney  
F-score  
Prec.  
Recall  
Left Kidney  
F-score  
Prec.  0.34  
Recall  
Liver  
F-score  
Prec.  0.96  
Recall 
Table 1: F-score, precision and recall measures for each method, individual organ and subject; the closer these values are to 1, the more accurate the segmentation. For the kidneys, where most of the color model overlap occurs, our method was a clear winner. For the liver, which has a bigger volume and a more distinct color model compared to the kidneys/background, the two methods performed comparably.
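For reference, the reported measures can be computed from a predicted mask and ground truth as below; this is a straightforward sketch using set semantics over pixel indices.

```python
def prf(pred, gt):
    """Precision, recall and F-score of a predicted pixel set against
    ground truth, with F = 2PR / (P + R)."""
    pred, gt = set(pred), set(gt)
    tp = len(pred & gt)                      # true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```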
6 Conclusion
We proposed a novel interactive multi-object segmentation method where objects are restricted by hedgehog shapes. The hedgehog shape constraints of an object limit its set of possible segmentations by restricting the segmentation’s allowed surface normal orientations. Hedgehog shape constraints can be derived from some vector field, e.g. the gradient of a user scribble’s distance transform. In addition, we showed how to modify expansion moves to optimize our multi-labeling problem with hedgehog constraints. We also proved submodularity of the modified binary expansion moves. Furthermore, we applied our multi-labeling segmentation with hedgehog shapes to 2D images and 3D medical volumes. Our experiments show a significant improvement in segmentation accuracy when using our method over the Potts model, especially in medical data, where our method outperformed the Potts model in separating multiple organs with similar appearances and weak edges.
Appendix: Discretization Issues
There are some challenges/drawbacks due to the discretization of the hedgehog constraints. For example, the number of representable surface orientations depends on the chosen neighborhood system, which can be remedied by using larger neighborhood systems. Also, it is possible for a polar cone to be under-represented by the neighborhood system if no edges happen to lie in it, which could result in a segmentation surface with folds. Furthermore, in cases where the vector field changes relatively fast w.r.t. the image resolution, it is possible for neighboring pixels’ hedgehog constraints to conflict.
Figure 12: (a) empty cone; (b) under-represented cone; (c, d) alternative remedies for the empty cone.
Cone under-representation:
a pixel’s polar cone is under-represented by the neighborhood system in two cases: (a) an “empty cone”, when there are no neighborhood edges consistent with the polar cone, as shown in Fig. 12(a); and (b) when a large part of the cone is unaccounted for, see Fig. 12(b), where the big cone is accounted for by only one edge. Based on our practical experience, only ignoring the former case has significant consequences, while ignoring the latter does not adversely affect the results.
The empty cone issue could be alleviated by increasing the neighborhood size. However, this is not always practical, because some cones would require a neighborhood edge that perfectly aligns with the surface normal, see Fig. 12(c). Alternatively, we propose adding to E_∞ the neighborhood edge nearest to the empty cone, as shown in Fig. 12(d).
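The proposed remedy, adding the neighborhood edge nearest to an empty cone, can be sketched as follows; the helper name and the choice of the polar-cone axis −∇d_p as the reference direction are our illustrative assumptions.

```python
import numpy as np

def nearest_edge_to_cone(grad_p, nbhd):
    """Offset in nbhd closest in angle to the polar-cone axis -grad_p,
    i.e. the candidate edge to add when the polar cone at p is empty."""
    axis = -np.asarray(grad_p, float)      # polar cone opens opposite to grad
    axis /= np.linalg.norm(axis)
    best, best_ang = None, np.inf
    for off in nbhd:
        e = np.asarray(off, float)
        e /= np.linalg.norm(e)
        ang = np.arccos(np.clip(np.dot(axis, e), -1.0, 1.0))
        if ang < best_ang:
            best, best_ang = off, ang
    return best
```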
Figure 13: (a) over-constrained; (b) with higher resolution; (c) after edge pruning. Edges were eliminated when they were inconsistent with the interpolated vector field (shown in light blue).
Figure 14: (a) vector field with rapid changes; (b) before pruning; (c) after pruning.
Fast-changing vector field:
the hedgehog prior (3) enforces the shape constraints at every pixel independently. When the vector field orientation changes rapidly between neighboring pixels, the resulting shape constraints could become contradictory, leading to over-constraining. As can be seen in Fig. 13(a), the contradictory shape constraints result in a construction where no surface can pass between the four neighboring pixels, i.e. all of them will either be labeled foreground or background.
One possible way to overcome fast-changing vector fields is to increase the image resolution via upsampling. As can be seen in Fig. 13(b), doubling the resolution alleviates the aforementioned issue. However, there is no simple answer as to which resolution guarantees no over-constraining, as it depends on the neighborhood system, θ, and the vector field. Also, increasing the image resolution is not a practical solution, as it adversely affects the running time.
Alternatively, one can try to resolve contradicting constraints by pruning E_∞. In this case, we interpolate the vector field’s orientation between every pair of neighboring pixels and eliminate their edge constraint(s) if inconsistent with the interpolated orientation, as shown in Fig. 13(c). Fig. 14 shows a synthetic example of fast-changing vector field orientations and how pruning the edge constraints alleviates the over-constraining problem.
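The pruning step can be sketched as below; interpolating the field between two endpoints by summing and normalizing their vectors is an illustrative choice, and the polar-cone consistency test mirrors the edge construction of (3).

```python
import numpy as np

def prune_conflicts(edges, grad, theta):
    """Drop an infinity edge (p, q) when it is inconsistent with the vector
    field interpolated between its endpoints.

    edges: ((py, px), (qy, qx)) pixel pairs; grad: HxWx2 unit field.
    """
    kept = []
    for p, q in edges:
        g = grad[p] + grad[q]              # midpoint (interpolated) orientation
        n = np.linalg.norm(g)
        if n == 0:                         # opposite vectors: clear conflict
            continue
        g /= n
        e = np.array(q, float) - np.array(p, float)
        e /= np.linalg.norm(e)
        # keep the edge only if it still lies in the interpolated polar cone
        if np.arccos(np.clip(np.dot(g, e), -1.0, 1.0)) >= np.pi / 2 + theta:
            kept.append((p, q))
    return kept
```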
References
 [1] S. Andrews, C. McIntosh, and G. Hamarneh. Convex multiregion probabilistic segmentation with shape prior in the isometric logratio transformation space. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2096–2103. IEEE, 2011.
 [2] Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In ICCV, volume I, pages 105–112, July 2001.
 [3] Y. Boykov and V. Kolmogorov. Computing geodesics and minimal surfaces via graph cuts. In International Conference on Computer Vision, volume I, pages 26–33, 2003.
 [4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In International Conference on Computer Vision, volume I, pages 377–384, 1999.
 [5] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, November 2001.

 [6] D. Cremers, F. R. Schmidt, and F. Barthel. Shape priors in variational image segmentation: Convexity, Lipschitz continuity and globally optimal solutions. In Computer Vision and Pattern Recognition (CVPR), pages 1–6. IEEE, 2008.
 [7] A. Delong and Y. Boykov. Globally optimal segmentation of multi-region objects. In International Conference on Computer Vision (ICCV), 2009.
 [8] A. Delong, A. Osokin, H. Isack, and Y. Boykov. Fast approximate energy minimization with label costs. International Journal of Computer Vision (IJCV), 96(1):1–27, January 2012.
 [9] P. F. Felzenszwalb and O. Veksler. Tiered scene labeling with dynamic programming. In Computer Vision and Pattern Recognition (CVPR), 2010.
 [10] V. Gulshan, C. Rother, A. Criminisi, A. Blake, and A. Zisserman. Geodesic star convexity for interactive image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010.
 [11] H. N. Isack and Y. Boykov. Energybased Geometric MultiModel Fitting. International Journal of Computer Vision (IJCV), 97(2):123–147, April 2012.
 [12] H. Ishikawa. Exact optimization for Markov Random Fields with convex priors. IEEE transactions on Pattern Analysis and Machine Intelligence, 25(10):1333–1336, 2003.
 [13] J. Kappes, B. Andres, F. Hamprecht, C. Schnorr, S. Nowozin, D. Batra, S. Kim, B. Kausler, J. Lellmann, N. Komodakis, et al. A comparative study of modern inference techniques for discrete energy minimization problems. In Computer Vision and Pattern Recognition (CVPR), pages 1328–1335, 2013.
 [14] V. Kolmogorov and Y. Boykov. What metrics can be approximated by geo-cuts, or global optimization of length/area and flux. In International Conference on Computer Vision, October 2005.
 [15] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts. In 7th European Conference on Computer Vision, volume III of LNCS 2352, pages 65–81, Copenhagen, Denmark, May 2002. Springer-Verlag.
 [16] K. Li, X. Wu, D. Z. Chen, and M. Sonka. Optimal surface segmentation in volumetric images: a graph-theoretic approach. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 28(1):119–134, January 2006.
 [17] S. M. Pizer, P. T. Fletcher, S. Joshi, A. Thall, J. Z. Chen, Y. Fridman, D. S. Fritsch, A. G. Gash, J. M. Glotzer, M. R. Jiroutek, et al. Deformable m-reps for 3D medical image segmentation. International Journal of Computer Vision, 55(2–3):85–106, 2003.
 [18] S. Pizer et al. Deformable M-Reps for 3D Medical Image Segmentation. International Journal of Computer Vision (IJCV), 55(2–3):85–106, November 2003.
 [19] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: interactive foreground extraction using iterated graph cuts. In ACM Transactions on Graphics (SIGGRAPH), August 2004.
 [20] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary mrfs via extended roof duality. In Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2007.
 [21] F. Schmidt and Y. Boykov. Hausdorff distance constraint for multi-surface segmentation. In European Conference on Computer Vision (ECCV), LNCS 7572, volume 1, pages 598–611, Florence, Italy, October 2012.
 [22] T. Schoenemann and V. Kolmogorov. Generalized sequential tree-reweighted message passing. Advanced Structured Prediction, page 75, 2014.
 [23] K. Siddiqi and S. Pizer. Medial representations: mathematics, algorithms and applications, volume 37. Springer Science & Business Media, 2008.
 [24] K. Siddiqi and S. Pizer. Medial Representations: Mathematics, Algorithms and Applications. Springer, December 2008.
 [25] O. Veksler. Star shape prior for graph-cut image segmentation. In European Conference on Computer Vision (ECCV), 2008.
 [26] N. Vu and B. Manjunath. Shape prior segmentation of multiple objects with graph cuts. In Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.