## 1 Introduction

Shape segmentation, the decomposition of shapes into functional and visually meaningful parts, has many applications in areas such as computer vision, graphics, computational geometry, and pattern recognition [1, 2, 3]. One of the most popular geometric constraints in shape segmentation is convexity [4, 5]. Decomposition of shapes into convex components is an active research topic that requires the computation of a shape convexity measure. Next, we briefly review related work on convex decomposition and shape convexity measures, and present the contributions of this paper.

Convex Decomposition: Decomposition of shapes into exact convex pieces can generate an unmanageable number of components due to noise and surface texture [6]. In this regard, several algorithms have been proposed in the literature for approximate convex decomposition (ACD) with a user-specified tolerance for the approximation [7, 4, 5, 6, 8, 1, 9]. Lien and Amato [7] presented one of the first significant works in the area of ACD, recursively resolving the most concave features until the concavity of every component is below a user-specified threshold. More recently, [8] and [10] use mutex pairs to create fewer and more natural nearly convex parts. What is common to most ACD methods in the literature is that they depend on the computation of some form of convexity measure for their decomposition.

Convexity Measure: Convexity is one of the most basic shape descriptors, with many applications [11]. The two most common definitions of the convexity of a shape are the region-based (RB) and the perimeter-based (PB) approaches. RB defines the convexity of a shape as the ratio of the shape's area to the area of its convex hull, whereas PB convexity is the ratio of the perimeter of the convex hull of the shape to the perimeter of the shape [11, 12]. However, the major problem of RB convexity methods is their insensitivity to deep and thin protrusions, while PB methods are intolerant to small boundary deformations [13]. Recent papers that address these problems of shape convexity measures can be found in [12, 11, 13, 14, 15].

Contributions: In this paper, we propose a novel convex decomposition and shape convexity measure based on Disjunctive Normal Shape Models (DNSM). The DNSM is an implicit and parametric shape model formed by a disjunction of convex polytopes, where each convex polytope is formed by a conjunction of half-spaces (see Fig. 1). The DNSM has recently been used in several image segmentation approaches [16, 17, 18, 19]. In the proposed convex decomposition, the polytopes are deformed using variational methods; they naturally remain convex during the evolution and hence capture convex parts without the need to compute convexity. Therefore, unlike most convex decomposition techniques in the literature, which depend on expensive convexity measure computations at every iteration, the proposed decomposition method generates a shape convexity measure as a by-product of its intermediate step. This paper has three major contributions: a robust convex decomposition method (Section 3), a novel shape convexity measure (Section 3.2), and an efficient shape representation (Section 3.3). By automatically decomposing shapes into their convex parts and representing each part with a convex polytope, we obtain a compact and geometrically more meaningful DNSM shape representation.

## 2 Disjunctive Normal Shape Model

In this section, we present the DNSM, which is used in the next section for the proposed convex decomposition and convexity measure. In the DNSM, conjunctions of half-spaces form convex polytopes, as shown in Fig. 1(a). The disjunction of the convex polytopes forms the DNSM shape representation, as shown in Fig. 1(b) (each color represents a polytope).

Consider the characteristic function $f: \mathbb{R}^d \to \{0, 1\}$, where $f(\mathbf{x}) = 1$ if $\mathbf{x} \in \Omega^+$ and $f(\mathbf{x}) = 0$ otherwise. Let $\Omega^+ = \{\mathbf{x} : f(\mathbf{x}) = 1\}$. Let us approximate $\Omega^+$ as the union of $N$ convex polytopes, $\Omega^+ \approx \bigcup_{i=1}^{N} \Pi_i$, where the $i$-th polytope $\Pi_i$ is defined as the intersection of $M$ half-spaces. $\Pi_i$ is defined in terms of its indicator function

$$g_i(\mathbf{x}) = \prod_{j=1}^{M} h_{ij}(\mathbf{x}), \qquad h_{ij}(\mathbf{x}) = \begin{cases} 1, & \sum_{k=1}^{d} w_{ijk} x_k + b_{ij} \geq 0 \\ 0, & \text{otherwise} \end{cases} \tag{1}$$

where $w_{ijk}$ and $b_{ij}$ are the weights and the bias term, and $d$ is the dimension. Therefore, $\Omega^+$ is approximated by $\bigcup_{i=1}^{N} \Pi_i$ and, equivalently, $f$ is approximated by the disjunctive normal form $\bigvee_{i=1}^{N} \bigwedge_{j=1}^{M} h_{ij}(\mathbf{x})$ [20]. Converting the disjunctive normal form to a differentiable shape representation requires the following steps. First, De Morgan's rules are used to replace the disjunction with negations and conjunctions, which yields $\neg \bigwedge_{i=1}^{N} \neg \bigwedge_{j=1}^{M} h_{ij}(\mathbf{x})$. Since conjunctions of binary functions are equivalent to their product and negation is equivalent to subtraction from $1$, $f$ can also be approximated as $f(\mathbf{x}) \approx 1 - \prod_{i=1}^{N} \left(1 - \prod_{j=1}^{M} h_{ij}(\mathbf{x})\right)$. Finally, we approximate the binary half-space indicators $h_{ij}(\mathbf{x})$ with logistic sigmoid functions $\sigma_{ij}(\mathbf{x}) = \frac{1}{1 + e^{-\left(\sum_{k=1}^{d} w_{ijk} x_k + b_{ij}\right)}}$ to get the differentiable approximation of the characteristic function

$$\tilde{f}(\mathbf{x}) = 1 - \prod_{i=1}^{N} \left(1 - \prod_{j=1}^{M} \sigma_{ij}(\mathbf{x})\right). \tag{2}$$

The only adaptive parameters are the weights and biases of the first layer of logistic sigmoid functions, $\{w_{ijk}, b_{ij}\}$, which define the orientations and positions of the linear discriminants that form the shape boundary. In equation (2), the $\tilde{f}(\mathbf{x}) = 0.5$ level set is taken to represent the interface between the foreground and background regions.
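As a concrete illustration, the differentiable characteristic function of equation (2) can be evaluated in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function name `dnsm` and the array layouts of the weights `W` and biases `b` are our own choices.

```python
import numpy as np

def dnsm(x, W, b):
    """Differentiable DNSM characteristic function of equation (2).

    x : (d,) point;  W : (N, M, d) half-space weights;  b : (N, M) biases.
    Returns a value in (0, 1): close to 1 inside the shape, close to 0 outside.
    """
    sigma = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoids of the N*M discriminants
    g = sigma.prod(axis=1)                       # polytope indicators g_i, shape (N,)
    return 1.0 - np.prod(1.0 - g)                # disjunction via De Morgan's rule

# Example: N = 2 polytopes in 2D, M = 3 half-spaces each, random parameters
rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 3, 2)), rng.normal(size=(2, 3))
val = dnsm(np.array([0.5, -0.2]), W, b)   # a value strictly between 0 and 1
```

Because every sigmoid lies strictly in $(0, 1)$, the products never saturate exactly, so the output is always strictly between 0 and 1; thresholding at 0.5 recovers the binary shape.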

## 3 Convex Decomposition

The proposed convex decomposition has three steps. First, a given shape is decomposed into many overlapping convex parts, starting from regularly distributed polytopes. Then, we present a novel local convexity measure used to sort and remove the less significant polytopes; we also show how this local convexity measure can be used as a shape convexity measure. The final convex decomposition is obtained by fitting the DNSM to the shape using only the selected polytopes. Although we only show results for 2D shapes, the proposed algorithm can be applied directly to 3D shapes as well.

### 3.1 Decomposition into Overlapping Convex Parts

The goal here is to represent a given shape with overlapping convex polytopes using the DNSM model (2). We start with a large number of polytopes, $N$, as can be seen in Fig. 2(a). The initialization polytopes are approximated as discs (spheres for 3D) of a fixed radius, and they are regularly distributed over the body of the shape. The DNSM discriminant parameters, $w_{ijk}$, that represent a given shape can be obtained by choosing the weights that minimize the energy

$$E = \int_{\Omega} \left(\tilde{f}(\mathbf{x}) - I(\mathbf{x})\right)^2 d\mathbf{x} \;-\; \alpha \int_{\Omega} \sum_{i=1}^{N} \sum_{l \neq i} g_i(\mathbf{x})\, g_l(\mathbf{x})\, d\mathbf{x} \tag{3}$$

where $g_i$ represents the individual polytopes of $\tilde{f}$, $I$ is the shape image with an intensity value of $1$ on the shape and $0$ on the background, and $\alpha$ is a constant. We minimize (3) using gradient descent, presented shortly, to obtain the parameters $w_{ijk}$ that represent the given shape.

The first term in (3) fits the model to the shape by minimizing the mean square error between the level set value $\tilde{f}(\mathbf{x})$ and the shape image intensity $I(\mathbf{x})$. The energy from the first term in (3) is minimized when $\tilde{f} \approx 1$ inside the object shape (where the intensity value is $1$) and $\tilde{f} \approx 0$ outside the object shape (where the intensity value of the ground truth is $0$). The second term in (3) maximizes the overlap between the different polytopes. Fig. 2(b) shows an example of the result obtained by applying equation (3) to the shape shown in Fig. 2(a), where the degree of brightness corresponds to the number of polytopes that overlap at that particular point.

The energy minimization requires computing the derivative of (3) with respect to each discriminant parameter, $w_{ijk}$. After a few steps of taking the partial derivatives, we obtain

$$\frac{\partial E}{\partial w_{ijk}} = \int_{\Omega} \left[ 2\left(\tilde{f}(\mathbf{x}) - I(\mathbf{x})\right) \prod_{l \neq i} \left(1 - g_l(\mathbf{x})\right) \;-\; 2\alpha \sum_{l \neq i} g_l(\mathbf{x}) \right] \frac{\partial g_i(\mathbf{x})}{\partial w_{ijk}}\, d\mathbf{x} \tag{4}$$

where $\frac{\partial g_i(\mathbf{x})}{\partial w_{ijk}} = g_i(\mathbf{x})\left(1 - \sigma_{ij}(\mathbf{x})\right) x_k$ (the bias gradient is obtained by replacing $x_k$ with $1$). Therefore, during the evolution the discriminant parameters are updated at each iteration as $w_{ijk} \leftarrow w_{ijk} - \eta \frac{\partial E}{\partial w_{ijk}}$, where $\eta$ is the step size.
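A discretized version of this gradient-descent update is easy to sketch in NumPy, replacing the integral over the domain with a sum over pixels. This is a minimal illustration under our own conventions for the data and overlap terms of the energy; the function name `grad_step`, the constants `alpha` and `eta`, and the array layouts are assumptions.

```python
import numpy as np

def grad_step(X, I, W, b, alpha=0.1, eta=0.1):
    """One gradient-descent step on the fitting energy, evaluated on a pixel grid.

    X : (P, d) pixel coordinates;  I : (P,) binary shape image values;
    W : (N, M, d) half-space weights;  b : (N, M) biases.
    """
    # Forward pass: sigmoids, polytope indicators g_i, and the DNSM level set f
    sigma = 1.0 / (1.0 + np.exp(-(np.einsum('nmd,pd->pnm', W, X) + b)))  # (P, N, M)
    g = sigma.prod(axis=2)                                                # (P, N)
    f = 1.0 - np.prod(1.0 - g, axis=1)                                    # (P,)

    # dg_i/dw_ijk = g_i * (1 - sigma_ij) * x_k  (product rule on g_i = prod_j sigma_ij)
    dg = g[:, :, None] * (1.0 - sigma)                                    # (P, N, M)

    # prod_{l != i} (1 - g_l): total product divided by the polytope's own factor
    prod_not_i = np.prod(1.0 - g, axis=1, keepdims=True) / np.maximum(1.0 - g, 1e-12)

    data = 2.0 * (f - I)[:, None] * prod_not_i                    # fitting term
    overlap = -2.0 * alpha * (g.sum(axis=1, keepdims=True) - g)   # overlap reward
    coef = (data + overlap)[:, :, None] * dg                      # (P, N, M)

    dW = np.einsum('pnm,pd->nmd', coef, X)   # accumulate over pixels
    db = coef.sum(axis=0)
    return W - eta * dW, b - eta * db

# One illustrative step on a tiny random problem
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))             # 50 "pixel" coordinates in 2D
I = (X[:, 0] > 0).astype(float)          # toy binary shape image
W, b = rng.normal(size=(3, 4, 2)), rng.normal(size=(3, 4))
W, b = grad_step(X, I, W, b)
```

In practice all polytopes share the forward pass, so one step costs $O(P N M d)$; the division that forms the leave-one-out product is guarded against $g_i \to 1$.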

The idea behind maximizing the overlap among the polytopes is that we can then easily identify and remove unnecessary polytopes, both for efficient shape representation and for approximate convex decomposition. For instance, an unnecessary polytope is one that does not represent any unique region. Removing such a polytope does not affect the shape representation, since the pixels it represents are already covered by other polytopes.

### 3.2 Shape Convexity Measure

For approximate convex decomposition, some polytopes are removed based on a ranking of their significance, which is their relative convexity. When exact convex decomposition is needed, the convexity measure discussed in this section is not necessary; however, approximate convex decomposition is desirable since it is more robust to minor noisy surface deformations and also results in a compact part-based shape representation. If necessary, the user can control the degree of approximation through the number of final polytopes remaining or through the relative convexity measure.

We define the significance measure of the $i$-th polytope, $S_i$, as

$$S_i = \frac{U_i}{A_i} + \beta \frac{A_i}{A_{\max}} \tag{5}$$

where $A_i$ is the size of the region represented by the $i$-th polytope, and $U_i$ is the size of the unique region represented by the $i$-th polytope only. $A_{\max}$ is the size of the largest polytope, and $\beta$ is a constant which is experimentally found to be around $0.25$. For instance, in Fig. 2(c), the largest of all the shown polytopes gives $A_{\max}$, and $U_2$ is the size of the unique region of polytope 2. The main idea behind equation (5) is that the significance of a given polytope depends both on the size of the unique region it represents and on its relative size compared to the largest convex part in the shape.

To show how the significance measure of equation (5) is also a local convexity measure, consider Fig. 3. The green and black parts are two different polytopes, and the overlap region of the two polytopes is shown in red. In Fig. 3(a), polytope 2 (in green) represents a highly convex local region, and hence it has a large unique region compared to its size, making its $S_2$ value large. On the other hand, in Fig. 3(b), polytope 2 has a small $S_2$ value. Therefore, by using deformable polytopes, the true convexity of a local region represented by a given polytope depends on the relative size of the unique region the polytope covers. Note that neither the region-based nor the perimeter-based convexity measure defined in Section 1 can capture the large convexity difference between the two shapes in Fig. 3. Therefore, based on the local convexity definition of equation (5), polytopes with a relatively small $S_i$ value are removed from further consideration during the approximate convex decomposition. Note that once a given polytope is removed, the unique region sizes of all the remaining polytopes must be recomputed, since the removal of a polytope can affect them.

Equation (5) can also be used to measure the (global) concavity of a shape. The global shape concavity is the sum of the local convexity measures of all the polytopes except the largest one; that is, $C = \sum_{i \neq i_{\max}} S_i$, where $i_{\max}$ is the index of the largest polytope. Therefore, the proposed global shape concavity depends on the number of convex components the shape has and on the relative size of the unique region each polytope represents compared to the largest convex component of the shape. Note that the global shape concavity measure is not necessary for the proposed approximate convex decomposition (which only requires the local convexity measure); however, the convexity of a shape has many applications of its own [12, 11, 13].
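The significance ranking and the global concavity score can be sketched in a few lines of Python. This assumes the additive form $S_i = U_i/A_i + \beta\, A_i/A_{\max}$ for the significance of the $i$-th polytope, where $A_i$ is the polytope's area, $U_i$ the area it covers uniquely, and $A_{\max}$ the area of the largest polytope; the function names and this exact form are our assumptions.

```python
def significance(areas, unique_areas, beta=0.25):
    """Significance (local convexity) of each polytope: a polytope is significant
    if most of its area is unique to it, or if it is large relative to the
    largest polytope (beta weights the second term)."""
    a_max = max(areas)
    return [u / a + beta * a / a_max for a, u in zip(areas, unique_areas)]

def concavity(areas, unique_areas, beta=0.25):
    """Global shape concavity: the sum of significances over all polytopes
    except the largest one."""
    s = significance(areas, unique_areas, beta)
    i_max = areas.index(max(areas))
    return sum(v for i, v in enumerate(s) if i != i_max)

# A shape covered by a single polytope is perfectly convex: zero concavity
assert concavity([100.0], [100.0]) == 0.0
# Two equal, fully unique polytopes: concavity 1 + beta = 1.25
assert concavity([10.0, 10.0], [10.0, 10.0]) == 1.25
```

Note that after removing a low-significance polytope, the `unique_areas` must be recomputed before ranking again, matching the iterative removal described above.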

### 3.3 Final Decomposition and Efficient DNSM

The final step in the (approximate) convex decomposition is to represent the shape with the DNSM using the few selected polytopes obtained from the significance measure. The shape representation using the DNSM should avoid both overlaps between the polytopes and the creation of gaps. For instance, in Fig. 2(c) we can see that removing many of the 'unnecessary' polytopes has created small gaps, while some of the remaining polytopes still overlap. The energy that is minimized in order to fill the gaps and remove the overlaps, using the final small number of selected polytopes, is

$$E = \int_{\Omega} \left(\tilde{f}(\mathbf{x}) - I(\mathbf{x})\right)^2 d\mathbf{x} \;+\; \alpha \int_{\Omega} \sum_{i=1}^{N} \sum_{l \neq i} g_i(\mathbf{x})\, g_l(\mathbf{x})\, d\mathbf{x} \tag{6}$$

which is similar to equation (3) except for the plus sign in front of the second term. That is, in equation (6) we penalize the overlap of the polytopes in order to represent each part with a unique convex polytope. The first term in equation (6) helps to fill the gaps and fits the DNSM model to the shape, as discussed previously for (3). Figure 2(d) shows the final convex decomposition obtained by applying equation (6) to the result in Fig. 2(c). Therefore, by decomposing a given shape and representing each convex part with a unique polytope, we achieve a very compact and geometrically meaningful shape representation. For instance, in Fig. 2(d) only the few selected polytopes (each defined by $M$ discriminants) are needed to represent the shape with great accuracy, resulting in a large compression rate. In addition, by storing the connectivity graph of the polytopes (that is, which polytope is connected to which), we can construct a graphical DNSM shape representation that can facilitate further shape analysis algorithms such as shape matching and recognition.
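The connectivity graph of the final parts can be extracted directly from a per-pixel part labeling by scanning adjacent pixel pairs. A minimal sketch assuming a 2D integer label image with `-1` marking the background; the function name `polytope_graph` and the 4-connectivity convention are our choices.

```python
import numpy as np

def polytope_graph(labels):
    """Adjacency of convex parts from a label image (4-connectivity).

    labels : 2D int array, -1 for background, k >= 0 for the polytope
    owning each pixel. Returns a set of edges {(i, j)} with i < j.
    """
    edges = set()
    h, w = labels.shape
    for dy, dx in ((0, 1), (1, 0)):            # right and down neighbours
        a = labels[:h - dy, :w - dx]
        b = labels[dy:, dx:]
        mask = (a >= 0) & (b >= 0) & (a != b)  # two different foreground parts touch
        for i, j in zip(a[mask].ravel(), b[mask].ravel()):
            edges.add((min(int(i), int(j)), max(int(i), int(j))))
    return edges

# Toy label image: three parts (0, 1, 2) around a background pixel (-1)
lab = np.array([[0, 0, 1],
                [0, -1, 1],
                [2, 2, 1]])
assert polytope_graph(lab) == {(0, 1), (0, 2), (1, 2)}
```

The resulting edge set, together with the per-part polytope parameters, forms the graphical DNSM representation mentioned above.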

## 4 Results

In this section, we present the experimental results for the proposed convex decomposition and the shape convexity measure.

Decomposition Results: Figure 4 shows shape decomposition examples using the proposed method and compares them with the results of Lien et al. [7]. As can be seen from the figure, the results of the proposed method (Fig. 4, second row) show a convex decomposition that is closer to human expectation than the results of [7]. Our algorithm also decomposes the shapes into a smaller number of parts than [7], which is essential for robust shape representation and can improve the efficiency of further processing.

Figure 5 gives additional convex decomposition examples using the proposed method, for shapes from the MPEG-7 dataset [21] and a walking person.

Convexity Results: We compare the concavity measure proposed in this paper with the two most commonly used concavity measures in the literature: the PB and RB concavities defined in Section 1. Figure 4 gives a comparison of the three concavity measures using shapes from the MPEG-7 dataset [21]. Looking at the shapes in the figure, one can easily see that the apple is the least concave (most convex), followed by the birds, then the camels, and finally the star device. However, the PB and RB concavity measures make mistakes both in ranking the different object classes and in ordering objects of the same class (for instance, the two birds) that have a very small concavity difference when observed by humans. The proposed DNSM-based concavity measure and its ranking correspond more closely to what is expected.

## 5 Conclusion

In this paper, we presented a novel convex decomposition using deformable convex polytopes, which naturally maintain their convexity during deformation. This is conceptually different from the techniques commonly found in the literature, which require the computation of convexity at every iteration, an operation that can be expensive and may not be reliable. The proposed decomposition method generates a shape convexity measure, which corresponds well to convexity as perceived by humans, as a by-product of its intermediate step. The approximate convex decomposition yields a compact and efficient DNSM in which each convex part is represented by a single polytope. In the future, we plan to extend the method presented in this paper to shape matching and recognition.

## References

- [1] Guilin Liu, Zhonghua Xi, and Jyh-Ming Lien, “Dual-space decomposition of 2d complex shapes,” in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, June 2014, pp. 4154–4161.
- [2] Ariel Shamir, “A survey on mesh segmentation techniques,” Computer Graphics Forum, vol. 27, no. 6, pp. 1539–1556, 2008.
- [3] Oliver Van Kaick, Noa Fish, Yanir Kleiman, Shmuel Asafi, and Daniel Cohen-OR, “Shape segmentation by approximate convexity analysis,” ACM Trans. Graph., vol. 34, no. 1, pp. 4:1–4:11, Dec. 2014.
- [4] Zhou Ren, Junsong Yuan, Chunyuan Li, and Wenyu Liu, “Minimum near-convex decomposition for robust shape representation,” in Computer Vision (ICCV), 2011 IEEE International Conference on, Nov 2011, pp. 303–310.
- [5] Jyh-Ming Lien and Nancy M. Amato, “Approximate convex decomposition of polyhedra and its applications,” Computer Aided Geometric Design, vol. 25, no. 7, pp. 503–522, 2008. Special issue: Solid and Physical Modeling and Applications Symposium 2007 (SPM 2007).
- [6] Mukulika Ghosh, Nancy M. Amato, Yanyan Lu, and Jyh-Ming Lien, “Fast approximate convex decomposition using relative concavity,” Computer-Aided Design, vol. 45, no. 2, pp. 494–504, 2013. Special issue: Solid and Physical Modeling 2012.
- [7] Jyh-Ming Lien and Nancy M. Amato, “Approximate convex decomposition of polygons,” Computational Geometry, vol. 35, no. 1–2, pp. 100–123, 2006. Special issue on the 20th ACM Symposium on Computational Geometry.
- [8] Hairong Liu, Wenyu Liu, and L.J. Latecki, “Convex shape decomposition,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, June 2010, pp. 97–104.
- [9] K. Mamou and F. Ghorbel, “A simple and efficient approach for 3d mesh approximate convex decomposition,” in Image Processing (ICIP), 2009 16th IEEE International Conference on, Nov 2009, pp. 3501–3504.
- [10] Zhou Ren, Junsong Yuan, and Wenyu Liu, “Minimum near-convex shape decomposition,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 35, pp. 2546–2552, 2013.
- [11] Zhouhui Lian, A. Godil, P.L. Rosin, and Xianfang Sun, “A new convexity measurement for 3d meshes,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, June 2012, pp. 119–126.
- [12] J. Zunic and P.L. Rosin, “A new convexity measure for polygons,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 26, no. 7, pp. 923–934, July 2004.
- [13] Raghuraman Gopalan, Pavan Turaga, and Rama Chellappa, “Articulation-invariant representation of non-planar shapes,” in Proceedings of the 11th European Conference on Computer Vision Conference on Computer Vision: Part III, Berlin, Heidelberg, 2010, ECCV’10, pp. 286–299, Springer-Verlag.
- [14] E. Rahtu, M. Salo, and J. Heikkila, “A new convexity measure based on a probabilistic interpretation of images,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, no. 9, pp. 1501–1512, Sept 2006.
- [15] Paul L. Rosin and Christine L. Mumford, “A symmetric convexity measure,” Computer Vision and Image Understanding, vol. 103, no. 2, pp. 101 – 111, 2006.
- [16] F. Mesadi, M. Cetin, and T. Tasdizen, “Disjunctive normal shape and appearance priors with applications to image segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, Eds., vol. 9351 of Lecture Notes in Computer Science, pp. 703–710. Springer International Publishing, 2015.
- [17] M. Ramesh, F. Mesadi, M. Cetin, and T. Tasdizen, “Disjunctive normal shape model,” in ISBI, IEEE International Symposium on Biomedical Imaging, 2015.
- [18] F. Mesadi and T. Tasdizen, “Disjunctive normal level set: An efficient parametric implicit method,” in 2016 IEEE International Conference on Image Processing (ICIP), Oct 2016.
- [19] M. Ghani, F. Mesadi, S. Kanık, Argunsah A., Israely I., M. Cetin, and T. Tasdizen, “Dendritic spine shape analysis using disjunctive normal shape model,” in ISBI, IEEE International Symposium on Biomedical Imaging, 2016.
- [20] M. Hazewinkel, Encyclopaedia of Mathematics: An Updated and Annotated Translation of the Soviet “Mathematical Encyclopaedia”, vol. 1 of Encyclopaedia of Mathematics. Springer, 1997.
- [21] L.J. Latecki, R. Lakamper, and T. Eckhardt, “Shape descriptors for non-rigid shapes with a single closed contour,” in Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, 2000, vol. 1, pp. 424–429.