Aggregated Pairwise Classification of Statistical Shapes

01/22/2019 ∙ by Min Ho Cho, et al.

The classification of shapes is of great interest in diverse areas ranging from medical imaging to computer vision and beyond. While many statistical frameworks have been developed for the classification problem, most are strongly tied to early formulations of the problem - with an object to be classified described as a vector in a relatively low-dimensional Euclidean space. Statistical shape data have two main properties that suggest a need for a novel approach: (i) shapes are inherently infinite dimensional with strong dependence among the positions of nearby points, and (ii) shape space is not Euclidean, but is fundamentally curved. To accommodate these features of the data, we work with the square-root velocity function of the curves to provide a useful formal description of the shape, pass to tangent spaces of the manifold of shapes at different projection points which effectively separate shapes for pairwise classification in the training data, and use principal components within these tangent spaces to reduce dimensionality. We illustrate the impact of the projection point and choice of subspace on the misclassification rate with a novel method of combining pairwise classifiers.




1 Introduction

Classification of shapes is a fundamental task in many application areas where the primary data object is an image. For example, in medical imaging, radiologists and doctors are often interested in classifying patients to different disease types based on shapes of anatomical structures. Consequently, the statistical analysis of shape data, and in particular the shape classification task, are of great interest to the research community. Our focus in this paper is on the multiclass shape classification problem, which presents some unique challenges. To elucidate the main difficulties, we begin by explaining what we mean by “shape data.”

The literature on shape analysis has considered many different mathematical representations of shape, including finite point sets or landmarks (Dryden and Mardia, 2016, 1992; Cootes et al., 1995), level sets (Malladi et al., 1996), skeletal models (Pizer et al., 2013), and diffeomorphic transforms or deformable templates (Grenander and Miller, 1998; Glaunès et al., 2008), among others. Stretching the definition of a landmark, consider a set of 2D images of leaves, each marked with a dense set of landmarks. The landmarks provide the outline of a leaf, and with them its shape. If the image is shifted, rescaled, or rotated, the shape remains unchanged; other transformations change the shape. Kendall (1984) recognized these invariances and defined shape as the geometric information in the set of landmarks that remains when translation, scaling and rotation have been filtered out.

In many applications, such as the leaf example that we return to in Section 5, it seems most natural to study the shape of an object via its entire outline rather than through a finite set of landmarks. In the 2D setting which we focus on, the shape is a planar, closed curve. The functional representation of such a curve replaces the set of landmarks with an alternative description. The curve is parameterized by a starting point on the shape and a mapping from the unit interval that describes the traversal of the shape, ending the journey at the starting point. Early versions of the functional representation relied on an arc-length parameterization for the traversal (Zahn and Roskies, 1972; Klassen et al., 2004). However, later papers showed that the arc-length parameterization was too rigid (Srivastava et al., 2011; Kurtek et al., 2012; Srivastava and Klassen, 2016), and that statistical analysis of shapes benefits from the more flexible elastic deformations. These elastic parameterizations rely on registration to determine the optimal point-to-point correspondences across objects (we describe this process formally later).

In this work, we adopt the popular square-root velocity function (SRVF) representation for elastic shape analysis of planar, closed curves (Joshi et al., 2007; Srivastava et al., 2011). There are two main advantages associated with this framework: (1) the SRVF simplifies a specific instance of an elastic metric (Mio et al., 2007; Younes, 1998) to the simple 𝕃² metric, facilitating efficient computation, and (2) the SRVF shape space is a quotient space of the unit Hilbert sphere for which many geometric quantities of interest have analytic expressions. These two ingredients allow parameterization-invariant (in addition to the other standard shape preserving transformations) comparisons and statistical models of shape. We exploit this representation for the model-based shape classification task. We provide more mathematical details on the SRVF and the formal definition of the associated shape space in Section 2.2.

1.1 Motivation

The curved nature of the shape space prevents us from directly using standard techniques for classification that are strongly tied to Euclidean geometry. The normal distribution, for example, is defined in ℝᵈ, with extension to spaces of infinite dimension provided by the Gaussian process. Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA), two popular model-based classification techniques which we consider in this work, are described in various fashions, but are intrinsically tied to the normal distribution, and hence to Euclidean spaces. While alternative classification approaches exist, usually based on shape distances and nearest neighbor-type classification rules (Kurtek et al., 2012; Laga et al., 2012), we argue that a model-based approach provides more flexibility in the definition of multiclass classification procedures. In particular, the ability to compute likelihoods for different shape classes allows us to aggregate multiple pairwise classifiers in a principled manner.

One standard approach to classification of shapes is based on “linearization” of the shape space (Pal et al., 2017; Srivastava et al., 2011). This is done by choosing a particular point in the shape space, usually given by the overall sample mean, identifying the (linear) tangent space at this point, and projecting the shapes into the tangent space via the inverse-exponential map. The role of the inverse-exponential map is illustrated in Figure 4. Once the shapes are projected into the tangent space, one can apply standard classification techniques. A major drawback of such an approach is that a single tangent space is generally chosen for the pairwise and multiclass problems. In the multiclass case, if one or more populations are very far from the others, projecting all shapes into a single tangent space at the overall mean introduces significant distortion into the tangent space shape coordinates. In turn, this has a negative effect on any subsequent statistical task, e.g., classification.

A second major challenge in shape classification is the high dimensionality of the shape space. Theoretically, the shape space and corresponding tangent space are infinite-dimensional since we use a functional representation of shape. In practice, the outlines are represented using a fine discretization, typically on the order of hundreds to thousands of points for an individual shape. This discretization leads to a tangent space of large, but finite, dimension. The large dimension necessitates modification of LDA and QDA through dimension reduction or regularization. We pursue dimension reduction via a standard form of tangent Principal Component Analysis (tPCA), and modify the LDA and QDA classification procedures accordingly.

Figure 1: A general outline of the multiclass shape classification problem.

Figure 1 summarizes the multiclass shape classification problem. We are given labeled training data in the form of curves. The first task is to extract the shape from these curves. This requires within class registration as mentioned earlier. The second task is to linearize the shape space to a Euclidean space, reduce the dimension, and then fit a classwise statistical model. The models we consider are based on LDA and QDA. Finally, to classify a new unlabeled object, one has to register its outline to each class and make an assignment based on a classification rule; we consider likelihood-based classification.

1.2 Contributions

Our investigations show the need to modify the standard linearization approach to classification of shapes. The main reason for this is the arbitrariness in the choice of the tangent space where the full procedure is carried out. While the sample mean leads to a small distortion in inter-point distances after projection into the tangent space (see Section 7.4.1 in Srivastava and Klassen (2016)), this does not guarantee improved classification performance for the multiclass problem. In other words, since the lower-dimensional Euclidean coordinates of the shapes (after tPCA dimension reduction) are intimately tied to the tangent space used to define them, the chosen point of projection can have a major impact on classification performance. This is especially true when the training data has large dispersion, making a single tangent space approximation unsuitable for statistical analysis. In view of this, we list our main contributions.

  • For multiclass shape classification, we provide a heuristic that suggests different projection points for different pairwise problems. We develop a method to combine the pairwise results, and compare its performance to a classifier based on a single projection point. The new procedure has substantially better performance than the currently-used single projection methods.

  • For aggregating the pairwise problems, we first propose a one shot method that chooses the class with the highest likelihood (based on the LDA or QDA models). Additionally, we define a recursive approach that drops the class with the lowest likelihood, and then recomputes all relevant quantities without this class present in the data. This procedure is especially effective when there are several classes that differ greatly from most groups in the data.

  • Finally, we suggest an intermediate method that is also based on aggregation of pairwise problems but only uses a single tangent space. This alternative procedure performs better than the one shot method in a single tangent space. As expected, it does not perform as well as the new pairwise procedure described above. The main motivation for this intermediate approach comes from the large computational cost of working with all pairwise tangent spaces (this requires the computation of all pairwise means), when the number of classes is large.

The rest of this paper is organized as follows. Section 2 briefly reviews the SRVF framework for elastic shape analysis of planar curves and defines tools for computing all relevant statistics. Section 3 begins by describing the currently used one shot classification approach in a single tangent space. It then defines the three novel procedures, which rely on pairwise statistics and dimension reduction to different degrees. Section 4 provides a simple simulation study that motivates the use of pairwise procedures in classification. Section 5 includes detailed empirical studies of a popular plant leaf dataset as well as an animal dataset with very diverse shapes. Section 6 provides a short discussion and lays out some directions for future work.

2 Elastic Shape Analysis Preliminaries

We review the elastic shape analysis framework, based on the square-root velocity function representation, proposed by Srivastava et al. (2011). For brevity, we do not provide all details of this framework, and refer the interested reader to Srivastava and Klassen (2016).

2.1 Shape Preserving Transformations

Let β: D → ℝ² represent a parameterized, planar curve. For an open curve, D = [0, 1], while for a closed curve, D = 𝕊¹. We restrict our analysis to the set of absolutely continuous curves. While our focus is on classification of planar shapes, these methods are also applicable to higher-dimensional curves. Under our notation, β(t) = (β₁(t), β₂(t)), where β₁ and β₂ are scalar-valued coordinate functions, and t is a parameter that traces the path β(t), in ℝ², from the start point of the curve to the end point of the curve. In the case of a closed curve, the start and end points are exactly the same.

Figure 2: Illustration of shape preserving transformations. The same leaf shape with different translations, rotations and scales (left), and different parameterizations (right).

As mentioned in the introduction, shape is invariant to translation, scaling, rotation and re-parameterization. The translation of a curve β is given by β + v (this addition is carried out for each parameter value t), where v ∈ ℝ². Invariance to this transformation is easiest to achieve, as it will be imposed directly in the representation. The rescaling of a curve is given by cβ, where c ∈ ℝ₊. Invariance to scale can be achieved via normalization: we will consider unit length curves only. The rotation of a curve is given by Oβ (again, this matrix multiplication is applied for each parameter value t); here, O ∈ SO(2), where SO(2), the set of 2 × 2 rotation matrices, is called the special orthogonal group. Finally, the re-parameterization of a curve is given by composition β ∘ γ, where γ ∈ Γ and Γ is the set of orientation preserving diffeomorphisms of D. In contrast to translation and scaling, rotation and re-parameterization are not easy to filter out of the representation.

Figure 2 illustrates all of the shape preserving transformations. The left panel shows the effect of translation, rotation and rescaling. The right panel shows the effect of re-parameterization. In particular, we display the sampling of points on the curve imposed by the different parameterizations using colors: the start point is given in black and the parameterization traces the curves from purple to yellow. The filled-in points correspond to the same parameter values across the three curves.

2.2 Square-Root Velocity Function Shape Space and Distance

Comparison of shapes of different objects is fundamental to shape analysis. This task, as well as subsequent statistical tasks, requires the definition of a distance between shapes. The 𝕃² norm given by ‖β₁ − β₂‖ = (∫ |β₁(t) − β₂(t)|² dt)^{1/2}, where |·| denotes the Euclidean norm in ℝ², seems a natural choice. However, this distance is not parameterization invariant because ‖β₁ − β₂‖ is not guaranteed to equal ‖β₁ ∘ γ − β₂ ∘ γ‖ (Srivastava et al., 2011). This suggests the need for a different distance on shapes.

Mio et al. (2007) defined a family of elastic Riemannian metrics that is invariant to all of the aforementioned shape preserving transformations, including re-parameterization. These metrics are called elastic because they provide a natural interpretation of shape deformations in terms of their bending and stretching/compression. However, despite these nice mathematical properties, their practical use in shape analysis was limited due to computational difficulties until Joshi et al. (2007) and Srivastava et al. (2011) introduced the square-root velocity function (SRVF)

q(t) = β̇(t) / √|β̇(t)|,   (1)

where β̇ is the derivative of β with respect to t. The SRVF simplifies the elastic metric to the 𝕃² metric, thereby facilitating easy computation (Srivastava et al., 2011). If β is absolutely continuous, then its SRVF is square-integrable, i.e., an element of 𝕃²(D, ℝ²), henceforth referred to simply as 𝕃² (Robinson, 2012). Further, one can uniquely recover the curve β from its SRVF q, up to a translation, via the relation β(t) = β(0) + ∫₀ᵗ q(s)|q(s)| ds, where β(0) is the start point of the parameterization. For the remainder of this paper, we focus on shape analysis of curves facilitated by the SRVF transform.

The translation of a curve is automatically filtered out under the SRVF representation, as it is based on the derivative of the curve. Restricting the set of all absolutely continuous curves to those that have unit length results in a unit norm constraint on the associated SRVFs: ‖q‖ = 1. Thus, we define the pre-shape space as C = {q ∈ 𝕃² : ‖q‖ = 1}, the unit sphere in 𝕃². We call this the pre-shape space as we have not yet filtered out rotations and re-parameterizations. Under the 𝕃² metric, the distance between q₁, q₂ ∈ C is given by d_C(q₁, q₂) = cos⁻¹(⟨q₁, q₂⟩), where ⟨q₁, q₂⟩ = ∫ q₁(t)·q₂(t) dt.
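To make the SRVF and the pre-shape distance concrete, the following is a minimal numerical sketch for discretized curves, assuming each curve is sampled as an m × 2 array on a uniform grid of [0, 1]; the function names and the small regularization constant are our own illustrative choices, not part of the original framework.

```python
import numpy as np

def srvf(beta):
    """SRVF of a discretized curve: q(t) = beta'(t) / sqrt(|beta'(t)|),
    rescaled to unit L2 norm so that q lies on the pre-shape sphere C."""
    t = np.linspace(0.0, 1.0, len(beta))
    dbeta = np.gradient(beta, t, axis=0)            # finite-difference derivative
    speed = np.linalg.norm(dbeta, axis=1)
    q = dbeta / np.sqrt(speed + 1e-12)[:, None]     # eps guards zero-speed points
    norm = np.sqrt(np.trapz(np.sum(q * q, axis=1), t))
    return q / norm

def preshape_dist(q1, q2):
    """Great-circle distance on the unit sphere: arccos of the L2 inner product."""
    t = np.linspace(0.0, 1.0, len(q1))
    inner = np.trapz(np.sum(q1 * q2, axis=1), t)
    return float(np.arccos(np.clip(inner, -1.0, 1.0)))
```

For instance, the pre-shape distance between the SRVFs of a unit-speed circle and an ellipse is strictly positive, while the distance of either curve to itself is numerically zero.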

Next, we aim to unify the representation of all SRVFs that are within a rotation and/or re-parameterization of each other. To do this, we first note that the SRVF of a rotated curve, Oβ, is simply given by Oq. The SRVF of a re-parameterized curve, β ∘ γ, is given by (q ∘ γ)√γ̇, where γ̇ again denotes the derivative. Briefly returning to the topic of invariance, it is precisely the extra term √γ̇ that makes the 𝕃² metric under the SRVF representation invariant to re-parameterizations. We define equivalence classes of the type [q] = {O(q ∘ γ)√γ̇ : O ∈ SO(2), γ ∈ Γ} and deem all SRVFs within an equivalence class to have the same shape. In other words, each equivalence class [q] represents a unique shape. The resulting shape space, given by all such equivalence classes, is S = {[q] : q ∈ C}.

The distance between two shapes is defined as the distance between their equivalence classes. To define this distance, we use the metric defined on the pre-shape space:

d_S([q₁], [q₂]) = min over O ∈ SO(2), γ ∈ Γ of d_C(q₁, O(q₂ ∘ γ)√γ̇).   (2)

The optimization problem over SO(2), also called Procrustes rotation, is solved using singular value decomposition (Section 5.7 in Srivastava and Klassen (2016)). To solve for the optimal re-parameterization one can either use a gradient descent approach (Section 5.8 in Srivastava and Klassen (2016)) or the Dynamic Programming algorithm (Robinson, 2012). In the case of closed curves, one must additionally perform an exhaustive search for the optimal starting point on the shape. The optimal rotation and re-parameterization (minimizers of Equation (2)) solve the registration problem. After registration, one can construct a geodesic path between two shapes by connecting them via a great circle on the pre-shape space C. Figure 3 shows two examples of elastic shape geodesics between different types of objects. The length of each path is precisely the shape distance d_S. Visually, the elastic geodesics represent natural deformations between shapes.
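The Procrustes rotation step can be sketched as follows for discretized SRVFs; this is the standard Kabsch/SVD solution, the helper name is our own, and the re-parameterization (Dynamic Programming) step is omitted.

```python
import numpy as np

def optimal_rotation(q_ref, q_mov):
    """Rotation O in SO(2) minimizing the L2 distance between q_ref and O q_mov,
    for m x 2 discretized SRVFs (Procrustes alignment via SVD)."""
    H = q_mov.T @ q_ref                       # 2 x 2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # force det(O) = +1 (no reflection)
    return Vt.T @ np.diag([1.0, d]) @ U.T
```

The aligned copy of the second SRVF is then obtained as `q_mov @ O.T`, i.e., each sampled value is rotated by O.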

Figure 3: Elastic geodesic paths, and corresponding distances, between different shapes.

2.3 Projection onto Tangent Space and Dimension Reduction

To facilitate classification methods naturally designed for linear spaces, such as LDA or QDA, we introduce the exponential map and its inverse to linearize the elastic shape space. Since the pre-shape space C is a unit sphere, the expressions for these mappings are analytic. At a given point p ∈ C, there is a (linear) tangent space, T_p(C). The exponential map exp_p: T_p(C) → C is given by (for v ∈ T_p(C) and p ∈ C)

exp_p(v) = cos(‖v‖)p + sin(‖v‖)(v / ‖v‖),   (3)

where ‖·‖ is the 𝕃² norm. This expression can be used to map vectors (along geodesics) from a tangent space to the representation space. The inverse of this mapping, called the inverse-exponential map exp_p⁻¹: C → T_p(C), is given by (for q ∈ C)

exp_p⁻¹(q) = (θ / sin θ)(q − cos(θ)p),   (4)

where θ = d_C(p, q); this expression can be used to transfer points from the representation space to a linear tangent space. We use the inverse-exponential map to transfer the shapes from the nonlinear shape space to an approximating (linear) tangent space. The left panel of Figure 4 provides an illustration of the exponential and inverse-exponential maps. The right panel shows that the tangent space coordinates depend heavily on the point used to define the tangent space. This, in turn, can have a major impact on statistical tasks performed in the tangent space, including classification.
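Treating a discretized, unit-norm SRVF as a point on a finite-dimensional unit sphere, the exponential and inverse-exponential maps can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere: follow the geodesic from p
    in the direction of the tangent vector v for arc length ||v||."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p.copy()
    return np.cos(nv) * p + np.sin(nv) * (v / nv)

def inv_exp_map(p, q):
    """Inverse-exponential (log) map: tangent coordinates of q in T_p."""
    inner = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(inner)
    if theta < 1e-12:
        return np.zeros_like(p)
    return (theta / np.sin(theta)) * (q - inner * p)
```

A useful sanity check is the round trip: for unit vectors p and q, the tangent vector v = inv_exp_map(p, q) is orthogonal to p, has norm equal to the geodesic distance, and exp_map(p, v) recovers q.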

Figure 4: Left: Two SRVFs q₁ and q₂ and their corresponding tangent space representations v₁ and v₂, computed using the inverse-exponential map at a point of projection p. Right: Two SRVFs and the different tangent space coordinates obtained using the inverse-exponential map at two different points of projection.

Most commonly, the tangent space chosen for statistical shape analysis is defined at the sample mean shape (Le, 2001), called the sample Karcher mean, which is defined using the shape distance given in Equation (2) (for data [q₁], …, [qₙ] ∈ S):

[μ] = argmin over [q] ∈ S of ∑ᵢ d_S([q], [qᵢ])².

While this mean is an entire equivalence class, we simply select one element of it for subsequent analysis, i.e., we choose some μ ∈ [μ]. The specific element that is chosen has no impact on the subsequent analysis.

The computation of the Karcher mean involves an optimization problem which is solved using a gradient descent approach (see, e.g., Kurtek et al. (2013) or Dryden and Mardia (2016)). Given a mean shape, we can define the Karcher covariance and study variability within and across shape classes using tangent Principal Component Analysis (tPCA). As a first step, we project all of the data into the tangent space at the mean using vᵢ = exp_μ⁻¹(q̃ᵢ), where exp_μ⁻¹ is given in Equation (4), q̃ᵢ = Oᵢ*(qᵢ ∘ γᵢ*)√γ̇ᵢ*, and Oᵢ* and γᵢ* are the rotation and re-parameterization of qᵢ, respectively, that minimize d_C(μ, O(qᵢ ∘ γ)√γ̇). In principle, there is a sample covariance for the vectors vᵢ in the infinite dimensional tangent space. In practice, the curves are sampled using a finite number of points, say m. Thus, one can simply form the observed tangent data matrix V (by stacking the coordinates for each vᵢ into a long vector of size 2m), and then compute the Karcher covariance matrix, K = (1/(n − 1)) VᵀV. Note that for typical shape data n ≪ 2m. The LDA and QDA classification approaches rely on covariance matrices, which are singular in this setting. This necessitates dimension reduction.

Figure 5: (a) A sample of nine butterfly outlines. (b) First two elastic principal directions of shape variability in the sample (mean shape in red). (c) Scatterplot of the 2D PC coefficients computed in the tangent space at the mean shape (filled-in point).

While one can potentially apply any common statistical technique for dimension reduction to the data matrix V, we use the tPCA approach (Dryden and Mardia, 2016; Kurtek et al., 2012). First, we use singular value decomposition to compute K = UΛUᵀ, where U is an orthonormal matrix with columns specifying the principal directions of shape variation, and Λ is a diagonal matrix with non-negative entries arranged in decreasing order specifying the principal component variances. Selecting r ≪ 2m, one has a lower-dimensional Euclidean representation of the shapes in the tangent space as xᵢ = Uᵣᵀvᵢ, with xᵢ ∈ ℝʳ, where Uᵣ contains the first r columns of U. These PC data are used for tangent space classification of shapes with LDA or QDA. Figure 5 shows an example of averaging and dimension reduction for a sample of nine butterfly shapes. The computed mean shape and first two principal directions of variability are natural summaries of the sample shapes. We also show a projection of the mean (filled-in point) and the data (hollow points) onto these first two principal directions, i.e., a plot of the first two PC coefficients.
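A minimal tPCA sketch on the n × 2m tangent data matrix follows, computed via the SVD of the centered data matrix rather than by forming the 2m × 2m covariance explicitly (the two are equivalent up to numerical detail); the function name and return layout are our own.

```python
import numpy as np

def tangent_pca(V, r):
    """Tangent PCA on the n x 2m tangent data matrix V (one tangent vector
    per row). Returns the top-r principal directions (rows), the n x r PC
    scores, and the r PC variances."""
    Vc = V - V.mean(axis=0)                   # center; mean is ~0 at the Karcher mean
    _, s, Wt = np.linalg.svd(Vc, full_matrices=False)
    basis = Wt[:r]                            # r x 2m orthonormal directions
    scores = Vc @ basis.T                     # PC coefficients x_i in R^r
    variances = s[:r] ** 2 / (len(V) - 1)     # singular values -> PC variances
    return basis, scores, variances
```

Working with the n × n problem implied by the thin SVD is also the natural choice numerically, since n ≪ 2m for typical shape data.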

3 Classification of Shapes on Tangent Spaces

We describe four different procedures for classification of shapes using Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) in tangent spaces. The first approach is in current use and serves as a baseline. We also discuss practical considerations in the context of data analysis. Throughout this section, we assume that the training data is balanced across classes. The unbalanced case can be handled using reweighting when computing the sample mean shape and by weighting the aggregated likelihoods.

3.1 One Shot Classification on a Single Tangent Space

In this baseline approach to classification (Pal et al., 2017; Srivastava et al., 2011; Kurtek et al., 2012), we begin by computing the overall mean shape μ, and a PC coefficient representation of the shapes in the tangent space at μ, using training data pooled over all classes. Since the linearized shape data often have large dimension compared to the amount of available training data, we use tPCA for dimension reduction, arriving at r dimensions. Under the assumption of normality in this r-dimensional space, the log-likelihood of a new observation x under QDA is given (up to an additive constant) by

L_j(x) = −(1/2) log|Σ_j| − (1/2)(x − μ_j)ᵀ Σ_j⁻¹ (x − μ_j),

where μ_j and Σ_j are the mean and covariance estimated using training data in the PC coefficient space of class j. For LDA, we use the pooled estimate of the covariance matrix in place of each Σ_j. After computing L_j(x) for each class, we choose the class with the largest log-likelihood. In this method, we use a single projection point and a single PC space for dimension reduction; for brevity, we refer to this approach as SS. Furthermore, we use a “one-shot” (OS) decision for classification.
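As a sketch of this decision rule, assuming the r-dimensional PC coefficients have already been computed (function names are illustrative, and the log-likelihood is evaluated up to the additive constant common to all classes):

```python
import numpy as np

def gaussian_loglik(x, mu, Sigma):
    """Gaussian log-likelihood of x under N(mu, Sigma), up to an additive constant."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * logdet - 0.5 * diff @ np.linalg.solve(Sigma, diff)

def classify_qda(x, means, covs):
    """Assign x to the class with the largest log-likelihood (QDA); for LDA,
    pass the pooled covariance estimate for every class."""
    scores = [gaussian_loglik(x, mu, S) for mu, S in zip(means, covs)]
    return int(np.argmax(scores))
```

Passing the same pooled covariance for every class makes the quadratic terms cancel up to a linear function of x, which is exactly the LDA rule.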

3.2 Aggregated Pairwise Classification on Single Tangent Space

In the multiclass case, one can improve upon the SS-OS approach, especially when one class of shapes is very different from the others. The unusual class may be easy to identify, and yet plays a significant role in determining the lower-dimensional PC space. As a result, the estimated PCs may be ineffective in discriminating between the other classes, leading to a higher than needed misclassification rate. This motivates us to introduce an approach based on PC decompositions of all possible pairwise covariance matrices.

Under this approach, we find μ, the mean shape of the training data pooled over all classes. The shapes are then projected into the tangent space T_μ(C) using the inverse-exponential map. For each pair of classes, (j, k), PCs are extracted from the covariance matrix computed using training data in classes j and k only. All training data are then represented in terms of these PCs and the pairwise log-likelihood of a new observation (based on the LDA or QDA model), L_{jk}(x), is computed. The log-likelihoods are aggregated by taking the mean, L̄_j(x) = (1/(J − 1)) ∑_{k ≠ j} L_{jk}(x), where J is the number of classes. Use of the geometric mean of the likelihoods (arithmetic mean of the log-likelihoods) has a long history in Bayesian statistics and has been used for, e.g., combining expert opinion (Berger, 2013), combining partial Bayes factors (Berger and Pericchi, 1996), and synthesizing different analyses (Yu et al., 2011). The new observation is then assigned to the class with the maximum mean log-likelihood (OS approach). In this method, we use a single tangent space and pairwise PC spaces for dimension reduction; for brevity, we refer to this approach as SP. In later sections, we demonstrate that this approach can provide significant improvements in classification over the SS-OS method.
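Given precomputed pairwise log-likelihoods, the aggregation step reduces to a mean and an argmax; the dictionary keyed by ordered class pairs is an illustrative data layout, not part of the original method description.

```python
import numpy as np

def aggregate_pairwise(loglik, J):
    """OS decision for pairwise aggregation: loglik[(j, k)] is the
    log-likelihood of the new observation for class j computed in the (j, k)
    pairwise PC space. Each class is scored by the mean of its J - 1
    pairwise log-likelihoods."""
    mean_ll = np.array([np.mean([loglik[(j, k)] for k in range(J) if k != j])
                        for j in range(J)])
    return int(np.argmax(mean_ll)), mean_ll
```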

3.3 Aggregated Classification on Pairwise Tangent Spaces

The projection from the nonlinear shape space to the linear tangent space distorts inter-shape distances. The amount of distortion depends on multiple factors including the point of projection and the dispersion of the data. Pursuing the heuristics of pairwise comparisons by projection to the tangent space at the sample Karcher mean, we consider projections to all pairwise tangent spaces, followed by aggregation.

Under this approach, for each pair of groups in the training data, (j, k), we compute the pairwise mean shape, μ_{jk}, and determine the tangent space T_{μ_{jk}}(C). We estimate PCs in this tangent space using the training data in classes j and k only. Then, data from all classes are projected into these pairwise tangent spaces, and PC-based means and variances are estimated. This leads to the log-likelihood of a new observation (based on the LDA or QDA model), L_{jk}(x), defined in each pairwise tangent space. The multiple log-likelihoods are then combined as before by taking the mean of the pairwise terms for each class. The new observation is then assigned to the class with the maximum average log-likelihood (OS approach). In this case, we use pairwise tangent spaces and pairwise PC spaces for dimension reduction; for brevity, we refer to this approach as PP. In many cases, this procedure further improves upon the SP-OS method.

3.4 Aggregated Pairwise Classification with Recursion

The distortion induced by a projection point far from a pair of classes can lead to a very small and numerically unstable contribution to the aggregated log-likelihood. If severe enough, the classification rule can be destabilized. The impact of these poor projection points (and PC spaces) can be limited through use of a recursive procedure. We begin with the calculation of the mean log-likelihood L̄_j(x) for each class j and new observation x. For the recursion, the class with the smallest mean log-likelihood is identified and dropped, leading to a similar problem with J − 1 classes. The recursion continues with a succession of problems with fewer classes until a single class remains.

As an example, suppose there are J classes. In the first stage, there are J(J − 1)/2 different tangent spaces defined at projection points μ_{jk}, 1 ≤ j < k ≤ J. The mean log-likelihood for class j is L̄_j(x) = (1/(J − 1)) ∑_{k ≠ j} L_{jk}(x). If L̄_{j′}(x) (where j′ ∈ {1, …, J}) is the smallest among all mean log-likelihoods, class j′ is dropped from the comparison. Then, we re-aggregate all of the log-likelihoods without projections associated with class j′. In the second stage, there are (J − 1)(J − 2)/2 projection points that do not include class j′. The new mean log-likelihood for class j and new observation x is given by L̄_j(x) = (1/(J − 2)) ∑_{k ≠ j, j′} L_{jk}(x). If L̄_{j″}(x) is the smallest re-aggregated log-likelihood among those of all classes except j′, class j″ is dropped from the comparison. Again, we re-aggregate the log-likelihoods without projections involving classes j′ and j″, and repeat this procedure. After repeating it J − 1 times, we obtain a unique class for the final classification decision. This recursive approach (REC) can be used instead of the OS method described earlier for SP and PP. In general, REC does not provide the same classification decision as OS.
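The recursive elimination can be sketched as follows, assuming pairwise log-likelihoods stored in a dictionary keyed by ordered class pairs (an illustrative layout; the function name is ours):

```python
import numpy as np

def classify_recursive(loglik, J):
    """REC decision: repeatedly drop the class with the smallest mean
    log-likelihood, recomputing the means over surviving classes only,
    until a single class remains."""
    alive = set(range(J))
    while len(alive) > 1:
        mean_ll = {j: np.mean([loglik[(j, k)] for k in alive if k != j])
                   for j in alive}
        alive.remove(min(mean_ll, key=mean_ll.get))
    return alive.pop()
```

Note that each elimination discards every log-likelihood computed in a tangent space (or PC space) involving the dropped class, which is precisely how the unstable contributions are removed.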

3.5 Practical Considerations

Figure 6: A diagram of different shape classification methods.

In Sections 3.2 and 3.3, we described two methods for multiclass shape classification that use pairwise procedures for dimension reduction to different degrees. Furthermore, in Section 3.4, we proposed a recursive approach for aggregating pairwise classification results. Figure 6 provides a flowchart of all of the approaches, starting with the baseline method SS and progressing to increasingly pairwise procedures (SP and PP). For SP and PP, the user has an additional choice of using the OS or REC decision for classification.

In general, in terms of classification accuracy, PP outperforms SP, which outperforms the baseline SS. However, the three methods have different computational costs, which must be taken into consideration when choosing the best approach for a given multiclass shape classification problem. The main computational bottleneck is the search for the sample Karcher mean, which is performed using an iterative, gradient-based algorithm. The SS and SP approaches require only one such computation, albeit with a potentially large sample size, since they rely on a single tangent space approximation. In contrast to SS which uses a single PC space for classification, SP estimates all pairwise PC spaces, making it computationally more expensive. However, this increase in computational complexity is minimal. On the other hand, PP requires a separate sample Karcher mean computation for every pair of classes, making it much more computationally expensive, especially when the number of classes is large. Thus, there is a tradeoff between computational cost and classification accuracy. In addition, for SP and PP, the final classification decision based on aggregated likelihoods can be made via the OS or REC approaches. The OS method is simple and easy to interpret. But, if the training data contains a mix of very similar and very diverse classes, then the REC procedure helps classification performance by eliminating the worst classes in a stepwise manner.

Finally, there are two additional choices in these classification procedures: (1) LDA vs. QDA, and (2) the dimensionality of the PC space. The first choice has been widely explored in the past for various problems, and we do not discuss it here further. The second choice is nontrivial, and there is no single prescription that applies across different problems and datasets. In general, we aim to achieve a low-dimensional yet faithful Euclidean representation of shape data via PCs. We have found that the proposed approaches are very robust to these two choices. This is confirmed in Section 5 for real data examples.

4 Simulation Study

We construct a one-dimensional example that conveys the effect of nonlinear projections on multiclass classification performance. We define three classes, each having a t-distribution with a common scale parameter but different location parameters. First, we sample data from these distributions. Next, the sampled data are transformed using a CDF-inverse CDF pair that contracts the tails of the distribution. This step is analogous to the nonlinear projections used for classification of shapes. Then, we apply standard LDA for classification; for a new observation, the classification decision rule chooses the class with the closest sample mean of the transformed data. Specifically, let X_{ij}, j = 1, …, n, be the jth random observation drawn from a t-distribution with ν degrees of freedom centered at μ_i. We transform X_{ij} to Y_{ij} using Y_{ij} = c + Φ⁻¹(F_ν(X_{ij} − c)), where F_ν is the CDF of the t_ν distribution, Φ is the CDF of the standard normal distribution, and c is a center point for the transformation.
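The tail-contracting transformation can be sketched with scipy, under our reading that it composes the t CDF with the standard normal quantile function around the center point c; the function name is illustrative.

```python
from scipy.stats import norm, t

def contract_tails(x, c, df):
    """y = c + Phi^{-1}(F_t(x - c)): push x through the t CDF and the standard
    normal quantile function, centered at c. Because the t distribution has
    heavier tails than the normal, extreme values are pulled inward."""
    return c + norm.ppf(t.cdf(x - c, df))
```

The transformation is odd about c and leaves c fixed, so its contracting effect on a class grows with that class's distance from the chosen center point, which is the mechanism the simulation isolates.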

We begin by sampling random observations in each of the three classes. To investigate the effect of the center point $c$ on classification performance, we generate the transformed data using various choices for $c$, and compare classification performance based on the overall closest mean and the pairwise closest means. Figure 7 displays the results of this simulation. The left, middle and right panels show results of pairwise classification for class 1 vs. class 2, class 1 vs. class 3 and class 2 vs. class 3, respectively. The values of $c$ are given on the horizontal axis, and the misclassification rates on the vertical axis. The black points mark the misclassification rates for different choices of $c$, the blue point marks the misclassification rate when the overall mean (across all three classes) is used as the center point for the transformation, and the red point marks the misclassification rate when the pairwise mean is used. Use of the pairwise means for classification is superior to use of the overall mean. For example, in (a), we are trying to discriminate between classes 1 and 2; the pairwise mean of these two classes (red point) yields a lower misclassification rate than the overall mean (blue point), and as one moves away from the pairwise mean in either direction, classification performance deteriorates severely. Aggregating results across the three pairwise problems yields an average misclassification rate nearly half that of the overall mean approach. This first simulation shows the benefit of choosing pairwise projection points for classification.
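A minimal numerical sketch of this setup, assuming the tail-contracting transform takes the reconstructed form y = c + Φ⁻¹(F_ν(x − c)) with ν = 3 (the paper's exact degrees of freedom, sample sizes and shift values are not reproduced here, and the function names are ours):

```python
import numpy as np
from scipy.stats import norm, t

def gaussianize(x, c, df=3):
    """Tail-contracting CDF-inverse-CDF transform centered at c:
    map t-quantiles (relative to c) to standard normal quantiles."""
    return c + norm.ppf(t.cdf(np.asarray(x) - c, df))

def closest_mean_class(x_new, train, c):
    """Decision rule: pick the class whose transformed sample mean
    is closest to the transformed new observation."""
    means = [np.mean(gaussianize(xi, c)) for xi in train]
    return int(np.argmin(np.abs(gaussianize(x_new, c) - np.array(means))))

# three t-distributed classes at assumed example locations 0, 1, 2
rng = np.random.default_rng(0)
train = [mu + rng.standard_t(3, size=500) for mu in (0.0, 1.0, 2.0)]
```

The center c plays the role of the projection point: choosing c near the pairwise mean of the two classes under comparison keeps both classes in the well-behaved region of the transform.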

Next, we assess four different LDA approaches: (i) LDA based on a single transformation at the overall mean, (ii) pairwise LDA with OS to determine class assignment, (iii) LDA at an optimally chosen transformation point with OS to determine class assignment, and (iv) pairwise LDA with REC to determine class assignment. For method (iii), we choose the transformation point from 21 evenly spaced candidates ranging from the smallest class mean to the largest. Instead of selecting fixed shift parameters, we randomly sample them from a uniform distribution. For each classification experiment, we again use the same sample sizes.
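The OS and REC decision rules can be sketched as follows. This is our own minimal reconstruction (the paper's exact likelihood aggregation may differ), with the pairwise classifiers' outputs stored as log-likelihoods per ordered class pair:

```python
def aggregate(loglik, classes):
    """Sum each class's pairwise log-likelihoods over pairs within `classes`.
    `loglik` maps (i, j) -> (log-lik of class i, log-lik of class j)."""
    scores = {k: 0.0 for k in classes}
    for (i, j), (li, lj) in loglik.items():
        if i in scores and j in scores:
            scores[i] += li
            scores[j] += lj
    return scores

def one_shot(loglik, classes):
    """OS: a single aggregation over all pairs, then pick the best class."""
    scores = aggregate(loglik, classes)
    return max(scores, key=scores.get)

def recursive(loglik, classes):
    """REC: drop the worst class, re-aggregate over the survivors, repeat."""
    remaining = list(classes)
    while len(remaining) > 1:
        scores = aggregate(loglik, remaining)
        remaining.remove(min(scores, key=scores.get))
    return remaining[0]
```

The re-aggregation step is what lets REC discard pairwise comparisons that involve already-eliminated classes, which is exactly where an implausible outgroup would otherwise contaminate the decision.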

Figure 7: Misclassification rates for different choices of the center point $c$, with performance using the pairwise mean in red and the overall mean in blue: (a) class 1 vs. class 2, (b) class 1 vs. class 3, (c) class 2 vs. class 3.
Figure 8: Excess misclassification rates (vertical axis) of (a) method (i), (b) method (ii) and (c) method (iii), as compared to the misclassification rate of method (iv) (horizontal axis).

Figure 8 shows 50 replicates (using the settings described above) for three comparisons: (a) method (i) vs. method (iv), (b) method (ii) vs. method (iv), and (c) method (iii) vs. method (iv). In each scatterplot, the horizontal axis gives the misclassification rate of method (iv), while the vertical axis gives the excess misclassification rate, defined as the misclassification rate of method (i), (ii) or (iii) minus that of method (iv). The solid line represents no excess misclassification; points located above it favor method (iv). In panels (a) and (b), method (iv) is clearly superior for the easier classification problems. Once the misclassification rate becomes large, there appears to be no benefit to using REC as the decision rule rather than OS. Panel (c) demonstrates that the recursive method is as good as method (iii), which searches for the optimal transformation point. The one-dimensional nature of this simple simulation allows such a search; since our focus is on classification of shapes, which lie in an infinite-dimensional space, such a search becomes extremely computationally expensive, and is perhaps even impossible.

5 Empirical Studies

We apply the proposed approaches to multiclass classification of two real shape datasets: plant leaves and animals. First, we consider a problem with a relatively small number of classes, by selecting only a few species from the leaf data. Then, we consider the entire datasets of leaves and animals. We begin with a brief description of the two datasets.

5.1 Data Description

We first work with the Flavia Plant Leaf dataset (Wu et al., 2007). The closed outlines used in our work were extracted from images of plants captured with a digital camera. The entire dataset consists of 1,907 observations of plant leaves split into 32 classes corresponding to plant species.

Bai et al. (2009) provide shape data for different animals whose outlines were segmented from natural images. The dataset consists of 100 observations for each of 20 types of animal: bird, butterfly, cat, cow, crocodile, deer, dog, dolphin, duck, elephant, fish, flying bird, chicken, horse, leopard, monkey, rabbit, mouse, spider and tortoise. In Figure 9(a), we show a single example for each animal class (in the same order as the list above). In Figure 9(b), we show a few examples of cats, monkeys and spiders. We note that there is substantial pose variability within each class, which makes the classification problem very difficult.

Figure 9: (a) One sample of each animal. (b) Examples of cats, monkeys and spiders.

5.2 Classification of a Small Subset of the Leaf Dataset

Figure 10: Top: Sample leaf shapes from four different plant species. Classes 1 (black), 3 (blue) and 4 (green) are similar. Class 2 (red) is easily discernible. Bottom: Plots of the first two PCs in the tangent space at the mean of all four classes (left) and at the mean of the three similar classes (right). Each point represents one leaf.

We consider classification of a small subset of leaves from the Flavia Plant Leaf dataset. To highlight the benefits of the proposed methods, we have selected three classes that are difficult to distinguish (classes 1, 3 and 4 drawn in Figure 10 in black, blue and green, respectively) and one outgroup that is easily distinguishable from the other three classes (class 2 drawn in Figure 10 in red). To assess classification performance, the dataset is randomly split into a training set of 40 leaves from each class, and a test set consisting of the remaining leaves (23 in class 1, 16 in class 2, 20 in class 3 and 16 in class 4). The first two PCs for one training dataset for all classes are plotted in the bottom-left panel of Figure 10. It is evident that class 2 (red points) is very far from the other classes and is easily distinguishable using any reasonable classification method, including LDA and QDA. However, classification among the other three classes is much more challenging as there is considerable overlap between the classes (bottom-right panel of Figure 10).
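To make the pipeline behind these PC plots concrete, here is a minimal numpy sketch of LDA in a PC space of tangent-space coordinates, assuming the shapes have already been mapped to tangent vectors at a chosen projection point. The function names and the equal-prior assumption are ours:

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions of tangent vectors X (rows = samples)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d]

def lda_fit(X, y, d):
    """Fit LDA in a d-dimensional PC space: class means plus a pooled
    within-class covariance (the LDA assumption)."""
    mu, V = pca_basis(X, d)
    Z = (X - mu) @ V.T
    classes = np.unique(y)
    means = {k: Z[y == k].mean(axis=0) for k in classes}
    S = sum(np.cov(Z[y == k].T) * (np.sum(y == k) - 1)
            for k in classes) / (len(y) - len(classes))
    return mu, V, means, np.linalg.inv(np.atleast_2d(S))

def lda_predict(x, model):
    """Assign the class with the smallest Mahalanobis distance
    (equivalent to max discriminant score under equal priors)."""
    mu, V, means, Sinv = model
    z = (x - mu) @ V.T
    scores = {k: -0.5 * (z - m) @ Sinv @ (z - m) for k, m in means.items()}
    return max(scores, key=scores.get)
```

QDA would differ only in fitting a separate covariance per class instead of the pooled S; the pairwise variants refit `pca_basis` (and, for PP, the projection point) per pair of classes.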

Class 1 vs. Class 3 Class 1 vs. Class 4 Class 3 vs. Class 4
Projection QDA LDA Projection QDA LDA Projection QDA LDA
1.73 1.70 5.08 4.91 4.93 4.47
2.26 2.50 6.64 6.22 5.30 5.05
2.75 3.12 6.66 15.04 5.47 6.63
2.97 3.64 7.11 6.95 6.53 6.90
2.99 10.51 7.18 7.72 6.88 6.75
3.31 21.44 7.23 11.70 8.26 8.59
3.73 15.62 7.64 13.75 8.83 10.05
4.14 4.01 7.93 8.15 10.00 11.33
4.54 15.31 8.35 8.68 12.09 11.22
4.95 6.06 10.14 22.46 12.41 12.70
7.05 17.57 10.71 17.12 13.11 16.52
12.42 24.93 18.27 26.66 20.16 19.83
Table 1: Average misclassification rate (%) of LDA and QDA based on 25 different random splits of the data. We consider various points of projection for the three pairwise problems for classes 1, 3 and 4. The pairwise mean shape projection point results are highlighted in bold. Projections involving the outgroup are highlighted in red.

Table 1 illustrates the impact of the projection point on the misclassification rate. We provide results for pairwise classification among classes 1, 3 and 4 only, noting that classification involving class 2 is always very good. The table reports average misclassification rates, computed over 25 random splits of the data, for 12 different projection points. To provide a single performance summary, we additionally average the misclassification rates over the different numbers of PCs (two through ten) used to define the lower-dimensional space. The candidate projection points are the mean shape of all training samples across the four classes, the mean shape of each individual class, all pairwise mean shapes, and the mean shape across classes 1, 3 and 4. The pairwise mean shape results are highlighted in bold, while projection points involving class 2 are highlighted in red. The results indicate that the pairwise mean shape projection points lead to better classification performance, while projection points that include the outgroup in the computation of the mean shape perform poorly. The magnitude of the impact of the projection point on the misclassification rate is striking and is consistent across the three pairwise problems.

Overall Projection Pairwise Projections Overall Projection Pairwise Projections
Overall PC Pairwise PCs Pairwise PCs Overall PC Pairwise PCs Pairwise PCs
PCs One Shot Stage 1 Stage 3 Stage 1 Stage 3 One Shot Stage 1 Stage 3 Stage 1 Stage 3
2 25.97 18.51 18.88 10.99 11.15 24.16 19.31 18.45 9.92 10.77
4 22.03 13.33 11.73 8.85 6.19 18.88 12.69 10.29 6.24 4.75
6 17.97 9.92 9.49 6.72 4.80 14.67 9.01 8.05 5.28 4.00
8 16.21 8.85 8.16 5.65 4.27 11.25 7.31 5.92 4.43 4.00
10 13.33 8.11 7.31 5.23 4.21 10.51 7.04 5.71 3.89 3.09
12 11.15 7.25 6.88 4.85 4.48 8.69 6.93 5.07 3.68 3.63
14 9.17 7.31 6.56 4.69 4.43 9.49 7.57 5.55 4.53 3.57
16 8.00 7.52 6.40 4.53 4.21 9.76 7.57 6.13 4.32 3.84
18 6.99 6.67 6.13 4.43 4.37 10.08 8.48 6.88 4.64 4.75
20 6.67 6.72 6.51 4.43 4.48 10.24 9.23 7.84 5.44 6.03
Table 2: Average misclassification rates (%) of LDA (left five columns) and QDA (right five columns), for the four-species leaf dataset, based on 25 random splits of the data into training and test sets.

Next, we apply the various procedures described in Section 3 to 25 different random splits of the data. We consider both LDA and QDA in PC spaces of different dimension ranging from two to 20. Table 2 presents the average misclassification rates on the test data. As a general trend, the misclassification rate when using LDA decreases as the number of PCs increases. When using QDA, the misclassification rate decreases as the number of PCs increases from two to ten and then increases slightly. These patterns show the interplay between dimension reduction and the complexity of the model being fit. Overall, QDA performs better than LDA, although LDA performs well for the PP approach with an OS decision for classification (Pairwise Projections, Pairwise PCs and Stage 1). The standard approach SS-OS (Overall Projection, Overall PC, One Shot) provides the poorest performance among the different methods.

Table 2 allows us to assess the value of various components of the procedures we have described: choice of projection point (single or pairwise), choice of PC space (single or pairwise), and choice of decision for classification (one shot or recursive). The table shows that the use of pairwise projection points is beneficial in classification based on both LDA and QDA. The comparison of the One Shot columns to the Stage 1 columns shows the value of selecting pairwise PC spaces rather than performing classification in a single PC space based on all training data. Finally, the value of recursion over the one shot approach is demonstrated by the reduction in misclassification rate from Stage 1 to Stage 3.

5.3 Classification of the Entire Leaf Dataset

We now apply all of the proposed procedures to the entire leaf dataset. Since the 32 species of leaves have various shapes, i.e., some classes are very similar while others are very different, we are again interested in investigating the differences in misclassification rate across the various modeling and final decision choices provided by the proposed methods. As in Section 5.2, we use 25 different random splits of the data into training and test sets. We use 40 training samples from each class, and the remaining samples for testing. This results in a balanced training set but an unbalanced test set. The total number of test cases across all classes is 627.

Overall Projection Pairwise Projections Overall Projection Pairwise Projections
Overall PC Pairwise PCs Pairwise PCs Overall PC Pairwise PCs Pairwise PCs
PCs One shot Stage 1 Stage 31 Stage 1 Stage 31 One shot Stage 1 Stage 31 Stage 1 Stage 31
2 54.50 33.60 33.72 24.04 23.30 45.69 24.73 30.33 18.09 19.50
4 41.72 31.20 27.60 21.79 19.42 33.35 18.01 22.02 13.45 14.36
6 32.54 29.14 25.93 20.26 17.91 24.38 15.87 19.27 10.51 12.57
8 30.39 27.73 25.06 18.62 17.23 22.41 14.87 17.42 9.73 11.88
10 29.22 26.67 24.58 17.71 16.96 21.38 15.05 17.58 9.95 12.01
12 27.30 26.02 24.52 17.23 16.87 20.34 15.62 17.86 10.53 12.53
14 26.18 25.42 24.54 16.89 16.82 20.85 16.37 18.71 11.32 13.29
16 25.37 24.91 24.43 16.56 16.82 22.21 17.70 19.37 12.27 14.35
18 24.78 24.71 24.41 16.33 16.81 23.77 19.18 20.21 13.22 15.41
20 24.57 24.38 24.36 16.08 16.82 26.08 20.89 21.75 14.53 16.81
Table 3: Average misclassification rates (%) of LDA (left five columns) and QDA (right five columns), for the entire leaf dataset of 32 species, based on 25 random splits of the data into training and test sets.

Table 3 shows the average misclassification rates for LDA and QDA. In general, the misclassification rates are larger than those in Table 2, as expected. Nonetheless, the proposed methods perform quite well, with PP-OS in an eight-dimensional PC space providing the lowest misclassification rate, only 9.73%, under the QDA model. Methods that use pairwise projection points and pairwise PC spaces again perform better than their single-projection counterparts. The LDA misclassification rates decrease as the dimensionality of the PC space increases; for QDA, the misclassification rates decrease up to a point and then increase slightly. These trends match those observed in the previous section. Overall, QDA performs better than LDA across all methods. We note that the full recursion does not always perform better than the one shot method in this case: for LDA, it reduces the misclassification rate up to around a 14-dimensional PC space, while for QDA the one shot method always performs better. A careful examination of intermediate stages (not included in Table 3) suggests that the recursion may help up to the stage where all outgroups have been removed from the data. After that, the log-likelihoods exhibit greater numerical stability and averaging across linearizations provides a modest benefit.

5.4 Classification of the Animal Dataset

This last set of results considers a dataset of various animals, which poses additional challenges for shape classification. While leaf shapes within the same class were very similar, within-class variability for the animal shapes is much larger due to the very different poses in which the animals were imaged. We use 25 different random splits of the data into training and test sets of sizes 60 and 40 per class, respectively. Thus, the total number of test cases across all classes is 800.

Overall Projection Pairwise Projections Overall Projection Pairwise Projections
Overall PC Pairwise PCs Pairwise PCs Overall PC Pairwise PCs Pairwise PCs
PCs One shot Stage 1 Stage 19 Stage 1 Stage 19 One shot Stage 1 Stage 19 Stage 1 Stage 19
12 62.99 61.52 59.54 45.67 53.49 54.58 48.77 49.05 34.40 41.60
14 61.17 60.59 58.90 44.50 52.75 52.95 47.68 48.07 33.28 41.14
16 60.51 59.85 58.20 43.41 51.87 51.67 47.10 47.45 32.68 40.48
18 59.85 59.22 57.47 42.24 51.14 50.64 46.53 46.38 32.39 40.21
20 59.08 58.40 56.58 41.37 50.08 50.49 46.21 46.13 32.31 40.49
22 58.17 57.69 56.29 40.55 49.35 50.58 46.06 46.27 32.40 40.84
24 58.14 57.07 55.95 39.82 48.97 50.63 46.47 46.41 32.84 41.58
26 57.49 56.54 55.69 39.16 48.63 50.85 46.87 46.82 33.53 42.32
28 56.96 56.15 55.43 38.61 48.37 51.40 47.34 47.38 34.48 43.74
30 56.47 55.83 55.28 37.95 47.98 51.85 48.12 47.88 35.57 44.82
Table 4: Average misclassification rates (%) of LDA (left five columns) and QDA (right five columns), for the animal dataset, based on 25 random splits of the data into training and test sets.

Table 4 shows the average misclassification rates for this example. The overall patterns are very similar to those for the leaf dataset, but the misclassification rates are much worse. Overall, QDA performs better than LDA, and the PP procedure leads to smaller misclassification rates than the SP method; the baseline SS method is the worst, as before. The recursion helps only for the SP method and hurts the PP methods. Notably, many more PCs are required in this example to achieve good classification performance, a consequence of the larger variability within and across classes.

6 Discussion

An important step in shape analysis is the move from the infinite dimensional, curved space where shapes naturally abide to a finite dimensional linear space that allows the use of a suite of standard statistical tools. In this article, we have shown that this linearization is not trivial, and that the details of the linearization can have a major impact on subsequent statistical inference.

The linearization consists of two main components: a projection point to determine a tangent space and dimension reduction by choice of PCs. We propose aggregation as a mechanism to make use of multiple linearizations driven by different PCs. Aggregation allows us to focus on the pairwise classification problem where the existing literature provides a sound heuristic for the linearization. By itself, the use of pairwise PCs followed by aggregation provides a substantial benefit.

Additionally, we note that the presence of an outgroup can harm the aggregation by contributing linearizations that have little relevance for the classes to which an observation might plausibly be assigned. We propose a recursion that can be quickly computed from the results of the aggregation. The recursion has proven successful for a problem with a modest number of classes and a clear outgroup. For problems with a profusion of classes and great within-class variation, the recursion harms performance.

This work opens up a number of problems, several of which we are in the process of examining. One is the choice of projection points for pairwise comparisons. A sensible alternative to the pairwise mean is the midpoint of the geodesic between the two classwise mean shapes. Although not exactly the same, these two types of projection points are typically close to one another. This alternative would greatly reduce computational cost: for K classes, we would only have to compute K classwise mean shapes instead of K(K-1)/2 pairwise means. A second problem is whether there are more effective ways to select the PCs for a given projection and pairwise classification problem. A third is to delineate the circumstances under which the recursion is beneficial. A fourth is whether the recursion can be modified to retain aggregation over a subset of linearizations. We expect to consider these and other problems in future work.
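As a quick sketch of the proposed alternative: on the unit sphere, where SRVFs of unit-length curves lie, the geodesic midpoint of two non-antipodal points is simply the normalized chordal midpoint, requiring no iterative optimization at all (the function name is ours):

```python
import numpy as np

def geodesic_midpoint(p, q):
    """Midpoint of the great-circle arc between unit vectors p and q.
    Valid for non-antipodal points: the chordal midpoint (p + q) / 2
    projected back to the sphere bisects the arc by symmetry."""
    m = p + q
    return m / np.linalg.norm(m)
```

This is why replacing pairwise Karcher means with geodesic midpoints between classwise means would remove the per-pair optimization cost entirely.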

Acknowledgements: We thank Dr. Hamid Laga from Murdoch University for providing the outlines from the Flavia Plant Leaf dataset. This research was partially supported by NSF DMS 1613110 (to SM), and NSF DMS 1613054, NSF CCF 1740761 and NIH R37 CA214955 (to SK).


  • Bai et al. (2009) Bai, X., W. Liu, and Z. Tu (2009). Integrating contour and skeleton for shape classification. In IEEE International Conference on Computer Vision Workshops, pp. 360–367.
  • Berger (2013) Berger, J. O. (2013). Statistical Decision Theory and Bayesian Analysis. Springer.
  • Berger and Pericchi (1996) Berger, J. O. and L. R. Pericchi (1996). The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association 91(433), 109–122.
  • Cootes et al. (1995) Cootes, T. F., C. J. Taylor, D. H. Cooper, and J. Graham (1995). Active shape models - their training and application. Computer Vision and Image Understanding 61(1), 38–59.
  • Dryden and Mardia (1992) Dryden, I. L. and K. V. Mardia (1992). Size and shape analysis of landmark data. Biometrika 79(1), 57–68.
  • Dryden and Mardia (2016) Dryden, I. L. and K. V. Mardia (2016). Statistical Shape Analysis: With Applications in R. Wiley.
  • Glaunès et al. (2008) Glaunès, J., A. Qiu, M. I. Miller, and L. Younes (2008). Large deformation diffeomorphic metric curve mapping. International Journal of Computer Vision 80(3), 317–336.
  • Grenander and Miller (1998) Grenander, U. and M. I. Miller (1998). Computational anatomy: An emerging discipline. Quarterly of Applied Mathematics LVI(4), 617–694.
  • Joshi et al. (2007) Joshi, S. H., E. Klassen, A. Srivastava, and I. H. Jermyn (2007). A novel representation for Riemannian analysis of elastic curves in R^n. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7.
  • Kendall (1984) Kendall, D. G. (1984). Shape manifolds, Procrustean metrics, and complex projective spaces. Bulletin of the London Mathematical Society 16(2), 81–121.
  • Klassen et al. (2004) Klassen, E., A. Srivastava, W. Mio, and S. H. Joshi (2004). Analysis of planar shapes using geodesic paths on shape spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(3), 372–383.
  • Kurtek et al. (2012) Kurtek, S., A. Srivastava, E. Klassen, and Z. Ding (2012). Statistical modeling of curves using shapes and related features. Journal of the American Statistical Association 107(499), 1152–1165.
  • Kurtek et al. (2013) Kurtek, S., J. Su, C. Grimm, M. Vaughan, R. Sowell, and A. Srivastava (2013). Statistical analysis of manual segmentations of structures in medical images. Computer Vision and Image Understanding 117(9), 1036–1050.
  • Laga et al. (2012) Laga, H., S. Kurtek, A. Srivastava, M. Golzarian, and S. J. Miklavcic (2012). A Riemannian elastic metric for shape-based plant leaf classification. In International Conference on Digital Image Computing Techniques and Applications, pp. 1–7.
  • Le (2001) Le, H. (2001). Locating Fréchet means with application to shape spaces. Advances in Applied Probability 33(2), 324–338.
  • Malladi et al. (1996) Malladi, R., J. A. Sethian, and B. C. Vemuri (1996). A fast level set based algorithm for topology-independent shape modeling. Journal of Mathematical Imaging and Vision 6, 269–290.
  • Mio et al. (2007) Mio, W., A. Srivastava, and S. H. Joshi (2007). On shape of plane elastic curves. International Journal of Computer Vision 73(3), 307–324.
  • Pal et al. (2017) Pal, S., R. P. Woods, S. Panjiyar, E. R. Sowell, K. L. Narr, and S. H. Joshi (2017). A Riemannian framework for linear and quadratic discriminant analysis on the tangent space of shapes. In Workshop on Differential Geometry in Computer Vision and Machine Learning, pp. 726–734.
  • Pizer et al. (2013) Pizer, S. M., S. Jung, D. Goswami, J. Vicory, X. Zhao, R. Chaudhuri, J. N. Damon, S. Huckemann, and J. S. Marron (2013). Nested sphere statistics of skeletal models. In Innovations for Shape Analysis: Models and Algorithms, pp. 93–115.
  • Robinson (2012) Robinson, D. T. (2012). Functional Data Analysis and Partial Shape Matching in the Square Root Velocity Framework. Ph. D. thesis, The Florida State University.
  • Srivastava et al. (2011) Srivastava, A., E. Klassen, S. H. Joshi, and I. H. Jermyn (2011). Shape analysis of elastic curves in Euclidean spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(7), 1415–1428.
  • Srivastava and Klassen (2016) Srivastava, A. and E. P. Klassen (2016). Functional and Shape Data Analysis. Springer.
  • Wu et al. (2007) Wu, S. G., F. S. Bao, E. Y. Xu, Y.-X. Wang, Y.-F. Chang, and Q.-L. Xiang (2007). A leaf recognition algorithm for plant classification using probabilistic neural network. In IEEE International Symposium on Signal Processing and Information Technology, pp. 11–16.
  • Younes (1998) Younes, L. (1998). Computable elastic distances between shapes. SIAM Journal on Applied Mathematics 58(2), 565–586.
  • Yu et al. (2011) Yu, Q., S. N. MacEachern, and M. Peruggia (2011). Bayesian synthesis: Combining subjective analyses, with an application to ozone data. Annals of Applied Statistics 5(2B), 1678–1698.
  • Zahn and Roskies (1972) Zahn, C. T. and R. Z. Roskies (1972). Fourier descriptors for plane closed curves. IEEE Transactions on Computers 21(3), 269–281.