Distinct intensity appearances and smooth contrast-aligned boundaries are standard segmentation cues. However, most real applications of image segmentation involve multiple objects of interest with similar or overlapping color appearances. Intensity edges could also be cluttered or weak. These common practical problems require additional regularization, as illustrated in the second row of Figure 1.
|two examples of images with seeds (medical and photo)|
|multi-object segmentation using Potts model|
|multi-object segmentation adding our hedgehog shapes prior|
differ in their shape space representation and their distance measures between a given segmentation and the learned shape space. SSM applies principal component analysis to a training dataset, fitting a shape space distribution represented by a mean shape and the modes of greatest variance. Any given segmentation could be penalized based on how well it aligns with this shape space, or it could be restricted to the learned shape space. M-rep 
is a coarse-to-fine discrete DSM approach. In contrast to the basic user scribbles in our method, M-reps require detailed user inputs defining a figural shape model for each segment. They also need training data to estimate their model parameters. SSM and DSM assume a fixed shape topology, which is often violated by specific problem instances, e.g. lesions, tumors, or horseshoe kidneys.
Our paper proposes a simple and sufficiently general shape regularization constraint that could be easily integrated into standard MRF methods for segmentation. Shape priors have been successfully used in binary graph cut segmentation [14, 25, 10]. While our “hedgehog” shape prior is a generalization of the popular star-convexity constraint  with several merits over previous extensions [14, 10], our main contribution is a multi-hedgehog prior in the context of multi-object segmentation problems.
We observe that similarity between object appearances and edge clutter are particularly problematic in larger multi-label segmentation problems, e.g. in medical imaging. Our multi-hedgehog prior is fairly flexible, has efficient optimizers, and shows significant potential in resolving very common ambiguities in multi-label segmentation problems, see Fig.1 (last row). Our general multi-object segmentation framework allows enforcing a “hedgehog” shape prior for any of the objects. The class of all possible hedgehog priors is sufficiently representative, yet each specific hedgehog constraint offers sufficient regularization to address color overlap and weak/cluttered edges. One extreme case of our prior is closely related to the standard star shape prior . The other extreme case allows shapes with restricted skeletons [18, 24].
The main contribution of our work is a practical and efficient way to combine distinct shape priors for segments in the popular multi-label MRF framework . Our work also allows extending previous multi-surface graph cut methods [16, 7]. For example,  compute multiple nested segments using one fixed polar grid defined by some non-overlapping rays. Besides a particular image discretization, these rays introduce two constraints: one star-like shape constraint shared by the nested segments, and a smoothness constraint penalizing segment boundary jumps between adjacent rays. In contrast, our method defines independent shape constraints for each segment. Similarly to , shape normals are constrained by arbitrary vector fields rather than non-overlapping rays  or trees [25, 10]. Our use of a Cartesian grid allows enforcing standard boundary length smoothness . While this paper focuses on a Potts model with distinct shape constraints, hedgehog shapes can easily be combined with inter-segment inclusion or exclusion constraints . The use of distinct (not necessarily nested) shape priors extends the range of applications in .
Our Potts framework optimization algorithm is closely comparable with a special non-submodular case of  with exclusion constraints. While we use independent shape constraints for each segment, these are easy to integrate into each layer in . More importantly, instead of a binary multi-layer formulation with non-submodular potentials, we use a multi-label formulation amenable to α-expansion. Besides memory savings, our approach solves the non-submodular segmentation problem with a guaranteed approximation quality bound. Section 4 discusses relations to [16, 7] in more detail.
Overview of contributions: We propose a new multi-label segmentation model and the corresponding optimization algorithm. Our contributions are summarized below.
hedgehog shape constraint - a new flexible method for segment regularization based on simple and intuitive user interactions.
new multi-object segmentation energy with multi-hedgehog shape priors.
we provide an extension of α-expansion moves  for the proposed energy.
experimental evaluation showing how our multi-object segmentation method solves problematic cases for the standard Potts model .
The rest of the paper is organized as follows. Section 2 defines our hedgehog shape prior for the simpler case of binary segmentation of one object. We discuss its properties and show how it can be globally optimized with graph cuts. Section 3 defines the multi-hedgehog shape constraint in the context of multi-label MRF segmentation and proposes an extension of the α-expansion optimization algorithm. Our experiments in Section  include multi-object segmentation of real photos and 3D multi-modal medical data.
2 Hedgehog shape constraint for one object
This section describes our hedgehog shape prior for a single object in the case of binary segmentation. Section 3 describes a more general multi-hedgehog segmentation prior where multiple objects can have separate hedgehog constraints. While the multi-hedgehog prior helps in a much wider range of problems, e.g. in medical imaging, binary segmentation with one “hedgehog” is an easier starting point and has merits of its own. In particular, the single-hedgehog prior generalizes the popular star-convexity  differently from other related methods [14, 10] in binary segmentation.
|(a) scribble’s distance map||(b) constraint|
Similarly to the star prior , the hedgehog prior could be defined interactively. Instead of a single click at the star center, a hedgehog shape allows an arbitrary scribble roughly corresponding to its skeleton. A hedgehog can also be defined by an approximate user-drawn outline of a desired shape or by a shape template. In any case, such a scribble, outline, or template defines the corresponding (signed) distance transform or distance map and a field of its gradients , as illustrated in Fig.2. Our hedgehog constraint for segment is defined by vector field and angular threshold restricting the orientations of surface normals at any point on the boundary of to a cone
assuming the gradient is defined at . More generally, the hedgehog constraint for segment could be defined by any given vector field specifying preferred directions for surface normals. Similarly to , we can use a dot product to define cones of allowed normals whose width varies depending on the magnitude of . In that case the constraint reduces to (1) for , since at all points where the gradient exists.
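As a concrete illustration, a scribble's distance map and the resulting gradient vector field could be computed as in the following numpy sketch (the function names are ours; a production implementation would use a fast distance transform instead of this brute-force version):

```python
import numpy as np

def distance_map(scribble):
    """Brute-force Euclidean distance transform of a binary scribble mask."""
    h, w = scribble.shape
    ys, xs = np.nonzero(scribble)
    seeds = np.stack([ys, xs], axis=1).astype(float)
    gy, gx = np.mgrid[0:h, 0:w]
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
    # distance from every pixel to its nearest scribble pixel
    d = np.sqrt(((grid[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(h, w)

def vector_field(scribble):
    """Unit gradient field of the scribble's distance map."""
    d = distance_map(scribble)
    gy, gx = np.gradient(d)
    norm = np.hypot(gy, gx)
    norm[norm == 0] = 1.0  # keep zero-gradient pixels as zero vectors
    return gy / norm, gx / norm
```

The gradient field points away from the scribble, so the allowed-normal cones encourage boundaries to wrap around it, roughly following the level sets of the distance map.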
2.1 Single hedgehog properties
Even the single hedgehog shape prior discussed in this section could be useful in practice. For example, in the case of a single click it closely approximates the popular star convexity . However, our formulation uses locally defined constraints, which can be approximated by a simple rule for selecting local edges, see Section 2.2. Unlike , we do not enforce a global tree structure, see Fig.3(b). Also, like [14, 10], the hedgehog prior allows a much larger variability of shapes for scribbles different from a point. In our case, a scribble defines a rough skeleton of a shape. For example, for smaller values of , our cone constraints (1) give a tighter alignment of surface normals with vectors , forcing the segment boundary to closely follow the level sets of the scribble's distance map , see Fig.5. In the limit, this implies consistency of the segment's skeleton with the skeleton of the given scribble, outline, or template.
2.2 Single hedgehog via graph cuts
We show an approximation for the hedgehog constraint (1) for object in the context of binary N-dimensional image segmentation via graph cuts. All cone constraints (1), for any given and distance map gradients , see Fig.3(a), correspond to a certain set of infinity-cost directed edges, see Fig.3(b-d). For example, consider the cone of allowed surface normals at some point , illustrated in Fig.4 for two different values of parameter . It is easy to see that a surface/boundary of segment passing through has normal iff this surface does not cross the corresponding polar cone
This reformulation of our hedgehog constraint (1) is easy to approximate via graph cuts by setting infinity cost on all directed edges adjacent to whose directions agree with polar cone , see Fig.4. To avoid clutter, the figure only shows such directed edges starting at , but one should also include similarly oriented directed edges pointing to . The set of all directed graph edges consistent with the local polar cone orientations, see Fig.3(b-d), is
Obviously, hedgehog constraints are better approximated by larger neighborhood systems ; e.g. a 32-neighborhood works better than an 8-neighborhood, see Fig.5(b,c).
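The edge-selection rule could be sketched as follows, assuming the convention that allowed normals lie within a half-angle alpha of the field vector v_p, so the polar cone consists of directions within pi/2 - alpha of -v_p (the names and the exact cone convention are our assumptions, not the paper's notation):

```python
import numpy as np

# 8-neighborhood offsets (dy, dx) on a 2D grid
N8 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

def polar_cone_edges(v, alpha, neighborhood=N8):
    """Offsets lying in the polar cone of the allowed-normal cone
    {n : angle(n, v) <= alpha}; such edges receive infinite cost.
    The polar cone is {e : angle(e, -v) <= pi/2 - alpha}."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    picked = []
    for e in neighborhood:
        u = np.asarray(e, dtype=float)
        u = u / np.linalg.norm(u)
        # inside polar cone iff <u, -v> >= cos(pi/2 - alpha) = sin(alpha)
        if np.dot(u, -v) >= np.sin(alpha) - 1e-9:
            picked.append(e)
    return picked
```

A larger neighborhood simply means more candidate offsets, which discretizes the polar cone more finely.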
The vector field used has a direct effect on the set of allowed shapes when varying . Figure 6 shows segmentation results for varying under two different vector fields on the same synthetic example.
|(a) wide cone of normals||(b) tight cone of normals|
|(a) image and user scribble (blue)||(b) 8-neighborhood||(c) 32-neighborhood|
|(a) image and user scribble (blue)||(b) Euclidean Distance Transform gradient||(c) synthetic vector field|
3 Multi-hedgehog segmentation energy
Given a set of pixels , neighborhood system , and labels our multi-labeling segmentation energy is
where is a labeling.
The first two terms, namely data and smoothness
terms, are widely used in computer vision, e.g. [4, 2, 19]. The data term is commonly referred to as the regional term, as it measures how well pixels fit their corresponding labels. To be specific, is the penalty for assigning label to pixel . Similarly to , each label's probabilistic model, a Gaussian mixture in our case, is fit to the seeds given by the user.
The smoothness term is a standard pairwise regularizer that discourages segmentation discontinuities between neighboring pixels. A discontinuity occurs whenever two neighboring pixels are assigned different labels. In its simplest form, , where is the Iverson bracket and is a non-increasing function of the intensities at and . Also, is a parameter weighting the importance of the smoothness term.
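For concreteness, the first two terms of such an energy could be evaluated as in this minimal sketch (uniform Potts weights on a 4-connected grid, with the contrast term omitted; the function name is ours):

```python
import numpy as np

def potts_energy(labels, unary, w):
    """E(f) = sum_p D_p(f_p) + w * sum_{pq in N} [f_p != f_q]
    on a 4-connected 2D grid. unary has shape (H, W, n_labels)."""
    data = np.take_along_axis(unary, labels[..., None], axis=2).sum()
    # count label discontinuities between vertical and horizontal neighbors
    cuts = (labels[1:, :] != labels[:-1, :]).sum() \
         + (labels[:, 1:] != labels[:, :-1]).sum()
    return data + w * cuts
```

A contrast-sensitive version would multiply each discontinuity by a non-increasing function of the local intensity difference before summing.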
The third term, our contribution, is the hedgehog term
where . Those familiar with graph cuts may prefer to think of it as an infinity-cost arc from to , thus prohibiting any cut that satisfies and .
The hedgehog term is the sum of the hedgehog constraints over all the labels, and it guarantees that any feasible labeling, i.e. , results in a segmentation whose surface normals respect the orientation constraints (1). (We say feasible, rather than bound, because there is at least one trivial solution with finite cost. In practice, it is reasonable to assume that one of the labels, e.g. the background label, does not require enforcing shape constraints; otherwise the problem could become over-constrained. One trivial solution is to label all pixels as background except those marked by user scribbles.) Notice that (6) reduces to  when , and the shape constraints are defined for only one of the labels by a single pixel.
3.1 Expansion Moves
In this section we describe how to extend the binary expansion moves of α-expansion  to respect the shape constraints, and show that these moves are submodular. The main idea of the α-expansion algorithm is to maintain a current feasible labeling , i.e. , and iteratively move to a better labeling until no improvement can be made. To be specific, at each iteration a label is chosen and the variables for all are given a binary choice : 0 to retain their old label or 1 to switch to , i.e. .
The hedgehog term (6) for a binary α-expansion move could be written as
The first term in (7) guarantees that the resulting labeling respects label hedgehog constraints. In addition, the second term guarantees that the hedgehog constraints satisfied by the current labeling for all labels in are not violated by the new labeling .
According to , any first-order binary function could be exactly optimized if all pairwise terms are submodular. A binary function of two variables is submodular if . Our energy (7) is submodular, as it could be written as the sum of submodular pairwise binary energies over all possible pairs of and . Notice that for any given pair, by construction and as long as the current labeling is feasible, i.e. it does not cut any of the infinity-cost arcs. Also, and are both by construction. Therefore, the submodularity condition is satisfied for all pairs of and .
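The submodularity condition can be checked mechanically. The sketch below (our own encoding, not the paper's) verifies it for a hedgehog infinity-cost arc, which forbids the cut (x_p, x_q) = (1, 0) in an expansion move, and for a Potts pairwise term:

```python
INF = float('inf')

def is_submodular(theta):
    """theta[a][b] is the pairwise cost of binary assignment (x_p, x_q) = (a, b).
    Submodular iff theta(0,1) + theta(1,0) >= theta(0,0) + theta(1,1)."""
    return theta[0][1] + theta[1][0] >= theta[0][0] + theta[1][1]

# hedgehog infinity arc: forbids the cut (x_p, x_q) = (1, 0) in an expansion move
hedgehog_arc = [[0, 0], [INF, 0]]

# Potts pairwise term for an expansion move: cost w whenever the move separates p and q
w = 1.0
potts_pair = [[0, w], [w, 0]]
```

Note that an arc forbidding (0, 1) is also submodular, whereas a term that put infinite cost on (1, 1) would not be; this is why the direction of the infinity arcs matters.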
|(a) initial Seeds||(b) current labeling|
|(c) (7) first term constraints||(d) (7) second term constraints|
|(e) feasible expansion move||(f) infeasible expansion move|
Fig.7 shows an example of an α-expansion move over the green label. We assume shape constraints only for the green and purple labels. Fig.7(a) shows the initial seeds for three different labels, while (b) shows the current feasible labeling. Fig.7(c-d) show the shape constraints enforced by the green and purple labels while expanding the green label. Note that the green shape constraints are enforced over the whole image, while the purple shape constraints are enforced only inside its current support area, as it is not necessary to enforce them everywhere. Fig.7(e) shows a feasible move that respects the green and purple shape constraints, while (f) shows an infeasible move that respects only the green shape constraints.
4 Relation to multi-surface graph cuts
Our work could be related to multi-object segmentation methods [16, 7] combining various forms of boundary regularization and interactions between the surfaces. In particular, Logismos  computes nested segments using polar grid layers (one per segment) as in Fig.8(b). In general, edges between the layers enforce inter-surface constraints such as minimum and maximum distances between the surfaces along each ray. For these constraints to work, the polar grids should be the same at all layers. Edges within each polar grid enforce regularity of the corresponding segment. Figure 8(b) details the construction. Red edges penalize inter-ray surface jumps (in polar representation, each segment corresponds to a labeling assigning the distance from the pole to the segment boundary along each ray; inter-ray smoothness then corresponds to a convex pairwise potential as in Ishikawa , and the inter-layer edges enforcing min and max distances along each ray are a special case of a convex potential), and infinity-cost green edges enforce a shape prior analogous to star convexity .
|Cartesian discretization approach||polar discretization approach|
|(a) Two identical hedgehogs with inclusion constraint ||(b) Two identical nested stars as in Logismos |
|(c) Two distinct hedgehogs with inclusion constraint ||(d) Extended multi-polar Logismos with distinct stars|
If considering only one segment, our hedgehog shape prior is closely related to both Logismos and star convexity. The use of a Cartesian grid makes our approach closer to the methods [14, 25] already discussed in Sec.2. Our prior is defined by a vector field, see Fig.3(a), instead of a polar system of non-overlapping rays , which requires considerable care during construction. Each vector at any of our grid pixels defines a cone of allowed surface normals, see Fig.4, controlled by width parameter . In particular, tighter cones enforce skeleton consistency. While our dual cone of infinity-cost edges resembles a combination of the green and red edges in each polar layer of Logismos, our geometrically motivated Cartesian approach uses simpler vector fields generalizing non-overlapping rays and does not require highly non-uniform polar resampling of images. In fact, our graph construction is technically different from Logismos, as evident from the discretization details presented in the Appendix.
Also, there are more substantial differences between our multi-hedgehogs method and Logismos. The latter enforces one star model for all nested shapes since it uses the same polar grids. In contrast, we do not require nested segments and allow independent shape priors at each segment. Our current approach does not enforce any geometric inter-segment distances. Thus, it can be seen as an augmentation of the standard Potts model with independent shape priors for each segment. However, the following two subsections discuss certain extensions of our multi-hedgehog approach and Logismos that make them more comparable.
4.1 Hedgehogs with inter-segment constraints
If additional geometric inter-segment constraints are needed, our hedgehog shapes could easily be integrated with the isotropic Cartesian formulations for inclusion, minimum margin, exclusion , and Hausdorff distance  constraints. For example, Fig.8(a) illustrates a layered graph construction enforcing zero-margin inclusion for two segments  (brown edges), combined with the same hedgehog shape prior (black edges, as in Fig.4) defined by identical vector fields (blue) at the two layers. It is also easy to switch to distinct shape priors for each segment by using different vector fields, see Fig.8(c).
Interestingly, replacing inclusion by a non-submodular exclusion constraint between the layers  makes the corresponding model conceptually close to our Potts approach with multi-hedgehog priors. Thus, our multi-label optimization by α-expansion on a single-layer graph in Sec.3.1 can be seen as an alternative to QPBO , TRWS , or other standard approximate optimization methods  applicable to the binary non-submodular multi-layered graphical model in . For significant memory savings and potential speed gains, it is possible to reformulate the geometric inter-segment constraints in  as multi-label segmentation potentials that can be addressed with efficient approximate algorithms on one image-grid layer, e.g. α-expansion, message passing, or other methods.
4.2 Multi-polar Logismos with distinct shape priors
It is interesting to consider an option of different polar grids at each layer of Logismos, as in Fig.8(d), which makes it comparable to multi-object segmentation  with distinct hedgehog shape priors (c) discussed in the previous subsection. While such a multi-polar extension of Logismos can provide distinct shape priors for the segments, it raises questions about the inter-layer interactions and their geometric interpretation. First, there is a minor problem of misalignment between the polar grid nodes. However, a bigger problem is the misalignment between the rays, which calls for a revision of the nestedness and along-the-ray distance constraints between the surfaces. If no nestedness is needed, then it is necessary to add non-submodular consistency constraints between the layers, i.e. exclusion . If nestedness is still required, then the simple inter-surface distance constraints in  are possible, but the minimum distance would be enforced along the rays of the smaller segment's layer and the maximum distance along the rays of the larger segment's. This discrepancy may be acceptable if the polar systems and the corresponding shape priors are close, but larger shape differences call for more isotropic definitions of inter-shape distances [7, 21] that are independent of polar discretization.
|(a) initial seeds||(b) Potts model|
|(c) Potts model||(d) Hedgehogs + Potts|
|(a) initial seeds||(b) Potts model|
|(c) Multi-Star + Potts model||(d) Hedgehogs + Potts model|
5 Experiments
In the following set of experiments we show the benefit of adding our hedgehog term (6) to the well-studied Potts model segmentation energy, i.e. data term + smoothness term, for multi-object segmentation in 2D and 3D. We also give an illustrative real-life example showing that the hedgehog shape is more general than the star shape . The results shown in this section for our method were generated using when computing the hedgehog shape constraints, and we did not enforce any shape constraints on the background model. The same smoothness weight is used when comparing methods unless stated otherwise.
Our optimization framework is similar to , where the user marks a set of initial seeds in the form of a scribble for the required labels, e.g. left kidney, right kidney, etc. The seeds for each label were used to fit an initial Gaussian mixture color model and to generate its hedgehog shape constraints. Similarly to [11, 8], we optimize our energy (5) (or the Potts model) in an EM-style iterative fashion: we alternate between finding a better segmentation and re-estimating the color models using the current segmentation. The framework terminates when it can no longer decrease the energy.
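The alternation could be sketched as follows, with two deliberate simplifications: per-label intensity means stand in for Gaussian mixtures, and a per-pixel argmin over the data term stands in for graph-cut optimization of the full energy (so shape and smoothness terms are omitted here):

```python
import numpy as np

def iterate_segmentation(image, init_labels, n_iters=20):
    """EM-style alternation: refit per-label intensity means, then
    re-segment by the data term alone. Assumes every label keeps at
    least one pixel (otherwise its mean would be undefined)."""
    labels = init_labels.copy()
    n_labels = int(init_labels.max()) + 1
    for _ in range(n_iters):
        # M-step: re-estimate each label's color model from its support
        means = np.array([image[labels == k].mean() for k in range(n_labels)])
        # E-step: assign every pixel to the best-fitting model
        unary = (image[..., None] - means) ** 2
        new_labels = unary.argmin(axis=-1)
        if np.array_equal(new_labels, labels):  # energy can no longer decrease
            break
        labels = new_labels
    return labels
```

In the full framework the E-step would instead run α-expansion on energy (5), so each reassignment also respects the smoothness term and the hedgehog constraints.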
For the example shown in Fig.9(a), (b-c) show Potts model results for and 6, respectively. It should be noted that is the smallest smoothness weight that did not result in over-segmentation when using Potts. However, the result in Fig.9(c) is biased towards smaller objects (notice the star tips) because increasing the smoothness weight also increases the shrinking bias. Over-segmented results like the one in Fig.9(b) could be avoided without increasing the shrinking bias, simply by incorporating multi-shape priors. Our method, which combines hedgehog shape priors with the Potts model, was able to find a better segmentation, see Fig.9(d).
The objective of the example shown in Fig.10(a) is to segment the left and right lungs, and the background. The Potts model result shown in Fig.10(b) has holes, i.e. part of the background appears in the middle of the lungs. Furthermore, the Potts model converged to biased color models where the right lung preferred brighter colors while the left preferred darker colors. As in the previous example, increasing for the Potts model increases the shrinking bias and makes it hard to segment the elongated part of the right lung. Using multi-star, a generalization of  to multi-object segmentation, is not enough because the right lung is not star-shaped. To be specific, there is no point inside the right lung that could act as the center of a star shape that would include it. Fig.10(d) shows the result for our method, where user scribbles were used to enforce shape constraints, as opposed to a single pixel per label .
We applied our method to PET-CT scans of three different subjects to segment the liver, left kidney, right kidney, and background. Although we applied our method and the Potts model to the 3D volumes, we only show the results on a few representative slices from each volume in Fig.11. Also, the results of different methods for each subject were computed using the same smoothness weight. The last two rows, which compare our method to Potts, show that using hedgehog constraints enabled us to avoid geometrically incorrect segmentations, e.g. one liver inside the other (last row, middle), or part of the left kidney appearing between the right kidney and the liver (last row, right). Furthermore, for test subjects 1 and 2 the kidneys and background were poorly segmented by the Potts model, e.g. most of the kidneys were segmented as background for test subject 1. The Potts model's poor performance is due to the large overlap between the kidney and background color models. This overlap results in a data term that is not discriminative enough for Potts to properly separate them. The issue becomes worse in iterative frameworks where color models are re-estimated from the current segmentation: if at any iteration the Potts model produces a bad segmentation, then re-estimating the color models biases them towards that bad segmentation, and subsequent iterations worsen the results. Comparing our results for subjects 1 and 2 to the Potts model shows that our method is less prone to this issue, as we forbid undesirable segmentations, i.e. those that do not respect the shape constraints.
|Subject 1||Subject 2||Subject 3|
Our method (Hedgehogs Shapes + Potts)
For quantitative comparison, Table 1 lists, for each organ of each subject, the F-score, precision, and recall measures of our method and the Potts model. For the kidneys, our method clearly outperformed the Potts model, e.g. note the Potts model's poor precision/recall for subjects 1 and 2. For the liver, both methods performed comparably.
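Assuming the score is the standard F1 measure (the paper's exact formula was not recoverable from this copy), these quantities could be computed per organ as:

```python
import numpy as np

def f_precision_recall(pred, gt):
    """pred, gt: boolean masks of one organ for one subject.
    P = TP/|pred|, R = TP/|gt|, F = 2PR/(P + R)."""
    tp = np.logical_and(pred, gt).sum()
    p = tp / pred.sum() if pred.sum() else 0.0
    r = tp / gt.sum() if gt.sum() else 0.0
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return f, p, r
```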
|Subject 1||Subject 2||Subject 3|
Table 1: F-score, precision, and recall measures for each method, organ, and subject; the closer these values are to 1, the more accurate the segmentation. For the kidneys, where most of the color model overlap occurs, our method was a clear winner. For the liver, which has a bigger volume and a more distinct color model than the kidneys/background, the two methods performed comparably.
We proposed a novel interactive multi-object segmentation method where objects are restricted to hedgehog shapes. The hedgehog shape constraints of an object limit its set of possible segmentations by restricting the allowed surface normal orientations of the segmentation. Hedgehog shape constraints could be derived from some vector field, e.g. the gradient of a user scribble's distance transform. In addition, we showed how to modify α-expansion moves to optimize our multi-labeling problem with hedgehog constraints, and we proved submodularity of the modified binary expansion moves. Furthermore, we applied our multi-label segmentation with hedgehog shapes to 2D images and 3D medical volumes. Our experiments show a significant improvement in segmentation accuracy when using our method over the Potts model, especially on medical data, where our method outperformed the Potts model in separating multiple organs with similar appearances and weak edges.
Appendix: Discretization Issues
There are some challenges/drawbacks due to the discretization of hedgehog constraints. For example, the number of representable surface orientations depends on the chosen neighborhood system , which could be remedied by using larger neighborhood systems. Also, it is possible for a polar cone to be under-represented by if no edges happen to lie in it, which could result in a segmentation surface with folds. Furthermore, in cases where the vector field changes relatively fast w.r.t. the image resolution, the hedgehog constraints of neighboring pixels may conflict.
|(a) Empty Cone||(b) Under-represented cone|
A pixel's polar cone is under-represented by in two cases: (a) “empty cone”, when no neighbor edges are consistent with the polar cone, as shown in Fig. 12(a), and (b) when a large part of the cone is unaccounted for, see Fig. 12(b), where the big cone is accounted for by only one edge. In our practical experience, only ignoring the former case has significant consequences, while ignoring the latter does not adversely affect the results.
The empty cone issue could be alleviated by increasing the neighborhood size. However, this is not practical because for the neighborhood edge would have to perfectly align with the surface normal, see Fig. 12(c). Alternatively, we propose adding to the neighborhood edge nearest to the empty cone, as shown in Fig.12(d).
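The proposed fallback could be sketched as follows (same cone convention as in Section 2.2, i.e. allowed normals within a half-angle alpha of v and polar cone within pi/2 - alpha of -v; the names are ours):

```python
import numpy as np

N8 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

def cone_edges_with_fallback(v, alpha, neighborhood=N8):
    """Offsets in the polar cone (axis -v, half-angle pi/2 - alpha).
    If the cone captures no offset ("empty cone"), fall back to the
    single offset closest in angle to -v."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    units = [np.asarray(e, dtype=float) / np.linalg.norm(np.asarray(e, dtype=float))
             for e in neighborhood]
    inside = [e for e, u in zip(neighborhood, units)
              if np.dot(u, -v) >= np.sin(alpha) - 1e-9]
    if inside:
        return inside
    # empty cone: pick the best-aligned offset instead of dropping the constraint
    best = max(range(len(units)), key=lambda i: float(np.dot(units[i], -v)))
    return [neighborhood[best]]
```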
|(a) over-constrained||(b) with higher resolution||(c) after edge pruning|
Edges were eliminated when they were inconsistent with the interpolated vector field (shown in light blue).
|(a) vector field with rapid changes||(b)||(c) after pruning|
Fast changing vector field:
The hedgehog prior (3) enforces the shape constraints at every pixel independently. When the vector field orientation changes rapidly between neighboring pixels, the resulting shape constraints could become contradictory, leading to over-constraining. As can be seen in Fig. 13(a), the contradictory shape constraints resulted in a construction where no surface could pass between the four neighboring pixels, i.e. all of them must be labeled either foreground or background.
One possible way to overcome fast-changing vector fields is to increase the image resolution via up-sampling. As can be seen in Fig. 13(b), doubling the resolution alleviated the aforementioned issue. However, there is no simple answer as to which resolution avoids over-constraining, as it depends on and on the vector field. Also, increasing the image resolution is not a practical solution, as it adversely affects the running time.
Alternatively, one can resolve contradicting constraints by pruning . In this case, we interpolate the vector field's orientation for every pair of neighboring pixels and eliminate their edge constraint(s) if they are not consistent with the interpolated orientation, as shown in Fig. 13(c). Fig. 14 shows a synthetic example of fast-changing vector field orientations and how edge constraint pruning alleviates the over-constraining problem.
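The pruning rule could be sketched per edge as follows; averaging the two endpoint vectors is our assumed interpolation scheme:

```python
import numpy as np

def keep_edge(v_p, v_q, offset, alpha):
    """Keep the infinity-cost edge between neighbors p and q only if the
    edge direction is consistent with the vector field interpolated
    (here: averaged) between the two endpoints."""
    v = np.asarray(v_p, dtype=float) + np.asarray(v_q, dtype=float)
    n = np.linalg.norm(v)
    if n == 0:  # opposite vectors: no well-defined interpolated direction
        return False
    v = v / n
    u = np.asarray(offset, dtype=float)
    u = u / np.linalg.norm(u)
    return bool(np.dot(u, -v) >= np.sin(alpha) - 1e-9)
```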
-  S. Andrews, C. McIntosh, and G. Hamarneh. Convex multi-region probabilistic segmentation with shape prior in the isometric log-ratio transformation space. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2096–2103. IEEE, 2011.
-  Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In ICCV, volume I, pages 105–112, July 2001.
-  Y. Boykov and V. Kolmogorov. Computing geodesics and minimal surfaces via graph cuts. In International Conference on Computer Vision, volume I, pages 26–33, 2003.
-  Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. In International Conference on Computer Vision, volume I, pages 377–384, 1999.
-  Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, November 2001.
-  D. Cremers, F. R. Schmidt, and F. Barthel. Shape priors in variational image segmentation: Convexity, Lipschitz continuity and globally optimal solutions. In Computer Vision and Pattern Recognition (CVPR), pages 1–6. IEEE, 2008.
-  A. Delong and Y. Boykov. Globally Optimal Segmentation of Multi-Region Objects. In International Conference on Computer Vision (ICCV), 2009.
-  A. Delong, A. Osokin, H. Isack, and Y. Boykov. Fast Approximate Energy Minimization with Label Costs. International Journal of Computer Vision (IJCV), 96(1):1–27, January 2012.
-  P. F. Felzenszwalb and O. Veksler. Tiered scene labeling with dynamic programming. In Computer Vision and Pattern Recognition (CVPR), 2010.
-  V. Gulshan, C. Rother, A. Criminisi, A. Blake, and A. Zisserman. Geodesic star convexity for interactive image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010.
-  H. N. Isack and Y. Boykov. Energy-based Geometric Multi-Model Fitting. International Journal of Computer Vision (IJCV), 97(2):123–147, April 2012.
-  H. Ishikawa. Exact optimization for Markov Random Fields with convex priors. IEEE transactions on Pattern Analysis and Machine Intelligence, 25(10):1333–1336, 2003.
-  J. Kappes, B. Andres, F. Hamprecht, C. Schnorr, S. Nowozin, D. Batra, S. Kim, B. Kausler, J. Lellmann, N. Komodakis, et al. A comparative study of modern inference techniques for discrete energy minimization problems. In Computer Vision and Pattern Recognition (CVPR), pages 1328–1335, 2013.
-  V. Kolmogorov and Y. Boykov. What metrics can be approximated by geo-cuts, or global optimization of length/area and flux. In International Conference on Computer Vision, October 2005.
-  V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts. In 7th European Conference on Computer Vision, volume III of LNCS 2352, pages 65–81, Copenhagen, Denmark, May 2002. Springer-Verlag.
-  K. Li, X. Wu, D. Z. Chen, and M. Sonka. Optimal surface segmentation in volumetric images-a graph-theoretic approach. IEEE transactions on Pattern Analysis and Pattern Recognition (PAMI), 28(1):119–134, January 2006.
-  S. M. Pizer, P. T. Fletcher, S. Joshi, A. Thall, J. Z. Chen, Y. Fridman, D. S. Fritsch, A. G. Gash, J. M. Glotzer, M. R. Jiroutek, et al. Deformable m-reps for 3d medical image segmentation. International Journal of Computer Vision, 55(2-3):85–106, 2003.
-  C. Rother, V. Kolmogorov, and A. Blake. Grabcut - interactive foreground extraction using iterated graph cuts. In ACM transactions on Graphics (SIGGRAPH), August 2004.
-  C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary mrfs via extended roof duality. In Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2007.
-  F. Schmidt and Y. Boykov. Hausdorff distance constraint for multi-surface segmentation. In European Conference on Computer Vision (ECCV), LNCS 7572, volume 1, pages 598–611, Florence, Italy, October 2012.
-  T. Schoenemann and V. Kolmogorov. Generalized sequential tree-reweighted message passing. Advanced Structured Prediction, page 75, 2014.
-  K. Siddiqi and S. Pizer. Medial representations: mathematics, algorithms and applications, volume 37. Springer Science & Business Media, 2008.
-  O. Veksler. Star shape prior for graph-cut image segmentation. In European Conference on Computer Vision (ECCV), 2008.
-  N. Vu and B. Manjunath. Shape prior segmentation of multiple objects with graph cuts. In Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.