 # Statistical shape analysis in a Bayesian framework for shapes in two and three dimensions

In this paper, we describe a novel shape classification method which is embedded in the Bayesian paradigm. We discuss the modelling and the resulting shape classification algorithm for two- and three-dimensional data shapes. We conclude by evaluating the efficiency and efficacy of the proposed algorithm on the Kimia shape database for the two-dimensional case.


## 0.1 Introduction

Shape is an important feature of objects; it can be used in many applications, such as the recognition and classification of objects in images. In our approach we represent such objects and their boundaries as continuous planar curves (i.e. one-dimensional lines which denote the outline of the object) and study their shapes. Our goal is to develop shape models, statistical procedures and classification methods for continuous planar shape curves, and to establish the statistical framework needed for their classification. In particular, we study how to classify shapes that are generated by such curves and how we can probabilistically assign them to their respective categories: given a set of pre-determined classes, we would like to classify the observed data shapes. We here define a data shape to be one of the shapes that we observed, i.e. an ordered set of points in $\mathbb{R}^2$ or $\mathbb{R}^3$. These questions occur in many applications of shape modelling and are thus of broad interest.

## 0.2 Modelling and classification

The problem of classification can be mathematically formulated via the posterior probability of the class in question given the observed data, that is by $P(C|y)$, where $C$ is the class of the object and $y$ the set of all the observed data shapes. In a Bayesian framework, classification is performed by maximising the posterior probability of the class, which by Bayes' theorem is $P(C|y)\propto P(y|C)\,P(C)$. For simplicity we choose the prior over the classes to be uniform, although it can be freely chosen. The major task is then to calculate the likelihood $P(y|C)$, which we partition over nuisance parameters that correspond to the data formation process; this implies the marginalisation of the likelihood over similarity transformations $g$ (namely translations, scales and rotations), bijections $b$, shape curves $\beta$, sampling functions $s$ and the inherent observational noise $\sigma$.

. For our applications we choose the observational model to represent errors in shape point collection as additive Gaussian white noise so that the likelihood function for the complete data is given by:

$$P(y|C)=\sum_{b\in B}\int \mathcal{D}\beta\,\mathcal{D}s\,\mathcal{D}g\,d\sigma\; P(b)\,P(s)\,P(g)\,P(\sigma)\,P(\beta|C)\times\exp\!\left(-\frac{1}{2\sigma^{2}}\sum_{i=1}^{n}\big|y_i-g\circ\beta\big(s(b^{-1}_i)\big)\big|^{2}\right)\qquad(1)$$

with a number of simplifying independence assumptions made. In order to estimate the posterior probability of a class, one should evaluate the sums and integrals over the nuisance parameters. In the next sections we discuss our computational strategies for dealing with these evaluations for both two and three dimensional shape data.
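To give a concrete (purely illustrative) feel for what the marginalisation in (1) involves, the sketch below approximates a class likelihood by Monte Carlo averaging of a Gaussian likelihood over a uniform rotation prior, in a toy two-dimensional setting with rotation as the only nuisance parameter. The templates, function names and parameter values are hypothetical and are not the method of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate(points, theta):
    """Apply a planar rotation to an (n, 2) array of points."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

def marginal_likelihood(y, template, sigma=0.1, n_samples=500):
    """Monte Carlo estimate of log P(y | C): average the Gaussian
    likelihood over rotations drawn from a uniform prior -- a toy
    stand-in for the full marginalisation over nuisance parameters."""
    thetas = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    log_liks = np.array([
        -np.sum((y - rotate(template, t)) ** 2) / (2.0 * sigma ** 2)
        for t in thetas
    ])
    # log-mean-exp for numerical stability
    m = log_liks.max()
    return m + np.log(np.mean(np.exp(log_liks - m)))

# Two hypothetical class templates, as ordered point sets.
square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
triangle = np.array([[0, 1], [-1, -1], [1, -1], [0, 1]], dtype=float)

# Observed data shape: a noisily rotated square.
y = rotate(square, 0.7) + rng.normal(0.0, 0.05, square.shape)

scores = {"square": marginal_likelihood(y, square),
          "triangle": marginal_likelihood(y, triangle)}
print(max(scores, key=scores.get))  # the square class should win
```

Note that, as in the paper, the data shape is an *ordered* point set, so the likelihood compares corresponding points rather than unordered clouds.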

## 0.3 The two dimensional case

Our main goal is to evaluate the integrals in expression (1) and thus perform maximum a posteriori (MAP) classification of a certain class given the data. In previous work, e.g. Dryden and Mardia (1998) and Srivastava and Jermyn (2009), the integrations over the nuisance parameters were evaluated numerically by a zeroth-order Laplace approximation. In Tsiftsi et al. (2014) we introduced an analytic way of carrying out the group integrations and the integration over the noise, resulting in a closed-form expression.

To achieve this, we had to make an appropriate and statistically sensible choice of priors. Initially we used Jeffreys' joint prior for the transformation and noise parameters; however, due to induced divergences, a regularised version was employed. Although this broke the invariance of the original posterior, the result of the integration was found to be:

$$P(y|b,\beta,s)\times P(b)\,P(s)\,P(\beta|C)\qquad(2)$$

where appropriate regulators, a normalisation constant and the sample points enter the closed-form expression; for details regarding the priors and the calculations we refer to Tsiftsi et al. (2014). The proposed algorithm returns high classification rates. To demonstrate its efficiency we present an example on two shape databases in Section 0.5.

## 0.4 The three dimensional case

Another problem of interest is the generalisation of the previous case to its three-dimensional equivalent, obtained by assuming that the data points lie in $\mathbb{R}^3$. The three-dimensional case is treated in a similar way to the two-dimensional one: our goal is to classify a shape by performing MAP estimation of the class $C$. We follow the same steps as in the two-dimensional case and marginalise the likelihood over the nuisance parameters that take part in the data formation process; however, the similarity transformations now involve rotations in $SO(3)$.

The challenge is the analytic evaluation of the integrals of the marginalised likelihood, and especially the integration over three-dimensional rotations. Initially, translations were integrated against the uniform Haar measure. The result, as expected, was analogous to the two-dimensional case and had to be integrated with respect to rotations. For this integration we chose to represent rotations as unit quaternions; a quaternion is written $q = q_0 + q_1\mathbf{i} + q_2\mathbf{j} + q_3\mathbf{k}$, with $\mathbf{i},\mathbf{j},\mathbf{k}$ the three special unit imaginary quaternions. An important point about quaternions is that they do not commute: the basis quaternions anti-commute, and they provide a representation of $\mathfrak{su}(2)$. For two pure quaternions the commutator is $2\,v_1\times v_2$, with $v_1$ and $v_2$ their vectorial parts.
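These algebraic properties can be checked numerically. The sketch below (illustrative only; `qmul` is a hypothetical helper) implements the Hamilton product in the $(w, x, y, z)$ convention and verifies that the basis quaternions anti-commute and that the commutator of two pure quaternions equals twice the cross product of their vectorial parts:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])

# Basis quaternions anti-commute: ij = -ji, and ij = k.
assert np.allclose(qmul(i, j), -qmul(j, i))
assert np.allclose(qmul(i, j), k)

# For pure quaternions the commutator is twice the cross
# product of the vectorial parts: [q1, q2] = 2 (v1 x v2).
v1, v2 = np.array([1., 2., 3.]), np.array([-2., 0.5, 1.])
q1, q2 = np.r_[0., v1], np.r_[0., v2]
comm = qmul(q1, q2) - qmul(q2, q1)
assert np.allclose(comm, np.r_[0., 2.0 * np.cross(v1, v2)])
print("quaternion identities verified")
```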

For the integration over quaternions, we choose to integrate over the full quaternionic space and impose a constraint that takes into account only unit quaternions. Since unit quaternions live on the surface of the unit 4-sphere, we impose the constraint $\delta(|q|^2 - 1)$. This $\delta$-function is invariant under the action of rotations on the parameters, since rotations act by isometries and thus do not change the length of the quaternions. Integrating the result of the translations over rotations while imposing the constraint, one has:

$$P(y|b,\beta,s,\sigma)=\int d^4q\;\delta\big(|q|^2-1\big)\,\exp\!\left(\frac{\big|\sum_{i=1}^{N}Y_i\big|^{2}}{2n\sigma^{2}}-\frac{\sum_{i=1}^{N}|Y_i|^{2}}{2\sigma^{2}}\right)\qquad(3)$$

To perform the integration, we replace the $\delta$-function by its Fourier equivalent, which introduces a second integration; the expression can then be simplified to:

$$P(y|b,\beta,s,\sigma)=\frac{1}{2\pi}\iint dk\,d^4q\;\exp\!\big(ik(|q|^2-1)\big)\exp\!\left(\frac{\big|\sum_{i=1}^{N}Y_i\big|^{2}}{2n\sigma^{2}}-\frac{\sum_{i=1}^{N}|Y_i|^{2}}{2\sigma^{2}}\right)=\frac{1}{2\pi}\iint dk\,d^4q\;\exp(-ik)\,\exp\!\big(4n\,[\,q^{T}M(k)\,q\,]\big)\qquad(4)$$

where $M(k)$ is the symmetrised, positive definite covariance matrix of the $Y_i$ components.
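As a small numerical sanity check (illustrative only, with a randomly generated matrix standing in for $M(k)$): only the symmetric part of a matrix contributes to a quadratic form, and diagonalising the symmetrised matrix turns $q^T M q$ into a weighted sum of squares while the orthogonal change of basis preserves $|q| = 1$:

```python
import numpy as np

rng = np.random.default_rng(3)

# A generic 4x4 matrix and its symmetrisation (stand-in for M(k)).
A = rng.normal(size=(4, 4))
M = (A + A.T) / 2.0
q = rng.normal(size=4)
q /= np.linalg.norm(q)  # a unit quaternion as a 4-vector

# Only the symmetric part contributes to the quadratic form.
assert np.isclose(q @ A @ q, q @ M @ q)

# Diagonalise M = V diag(lams) V^T and rotate q into the eigenbasis.
lams, V = np.linalg.eigh(M)
q_tilde = V.T @ q

# The quadratic form becomes a weighted sum of squares, and the
# orthogonal change of basis is an isometry, so |q_tilde| = 1.
assert np.isclose(q @ M @ q, np.sum(lams * q_tilde ** 2))
assert np.isclose(np.linalg.norm(q_tilde), 1.0)
print("quadratic form checks passed")
```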

We now discuss the integral over the quaternionic parameters that generate the rotations. Following Wood (1993), we calculated the appropriate Haar measure for the quaternionic representation, which was shown to be related to the normalisation constant of the Bingham distribution. The result of integrating over $k$ will supply the Haar measure on the space of unit quaternions and restrict our parameters to this surface. By diagonalising $M(k)$ we can rewrite the exponent of expression (4) as:

$$\int_{S^3}\exp\!\big(4n\,[\,q^{T}M(k)\,q\,]\big)\,d[\tilde{q}]=\int_{S^3}\exp\!\Big(4n\sum_i\lambda_i\,\tilde{q}_i^{\,2}\Big)\,d[\tilde{q}]\qquad(5)$$

Here, the $\tilde{q}_i$ generate rotations in $SO(3)$, which will be uniformly distributed if and only if the $\tilde{q}_i$ are uniform on a unit hemisphere in $\mathbb{R}^4$. This means that choosing the usual uniform measure on $S^3$ for $\tilde{q}$ induces the Haar measure on the space of rotations. This ensures that the chosen measure in (4) is the appropriate one and induces invariance under the action of rotations, so that we do not favour one rotation over another.
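The correspondence between uniform unit quaternions and Haar-distributed rotations can be illustrated with a short sketch (not part of the paper's algorithm): normalising a 4D Gaussian sample gives a quaternion uniform on $S^3$, which is then converted to a rotation matrix via the standard quaternion-to-matrix formula.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    """Haar-distributed rotation in SO(3): normalise a 4D Gaussian
    sample to get a quaternion uniform on S^3, then convert it to
    a rotation matrix with the standard formula."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

Rs = [random_rotation() for _ in range(2000)]

# Rotations act by isometries: each matrix has determinant 1.
assert all(np.isclose(np.linalg.det(R), 1.0) for R in Rs)

# Under the Haar measure no rotation is favoured: the image of a
# fixed unit vector is uniform on the sphere, so its sample mean
# is close to the zero vector.
e = np.array([0., 0., 1.])
mean_image = np.mean([R @ e for R in Rs], axis=0)
print(np.linalg.norm(mean_image))  # close to zero
```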

Returning to expression (4), it is easy to see that the integral with respect to $q$ is that of a multivariate Gaussian distribution. Assuming that the eigenvalues of the matrix $M(k)$ are negative, the evaluation of the quaternionic integral of this multivariate Gaussian distribution is:

$$P(y|b,\beta,s,\sigma)=\frac{1}{2\pi}\iint d^4q\,dk\;\exp(-ik)\,\exp\!\big(4n\,[\,q^{T}M(k)\,q\,]\big)\;\propto\;\frac{1}{2\pi}\int dk\;\exp(-ik)\,\frac{4n\,\pi^{2}}{\sqrt{\det(M)}}\qquad(6)$$
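The Gaussian step used here can be verified numerically: for a negative-definite $4\times 4$ matrix $A$ (playing the role of $4n\,M(k)$), the closed form $\int_{\mathbb{R}^4}\exp(q^T A q)\,d^4q = \pi^2/\sqrt{\det(-A)}$ can be checked against a product of one-dimensional quadratures after diagonalisation. This is an illustrative check, not part of the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(2)

# A random negative-definite symmetric 4x4 matrix.
B = rng.normal(size=(4, 4))
A = -(B @ B.T + 0.5 * np.eye(4))  # eigenvalues strictly negative

# Closed form of the 4D Gaussian integral.
closed = np.pi ** 2 / np.sqrt(np.linalg.det(-A))

# Numerical check: diagonalise A; the integral factorises into
# four 1D Gaussian integrals, one per eigenvalue.
lams = np.linalg.eigvalsh(A)
x = np.linspace(-12.0, 12.0, 48001)
dx = x[1] - x[0]
numeric = np.prod([np.sum(np.exp(l * x ** 2)) * dx for l in lams])

print(closed, numeric)
assert np.isclose(closed, numeric, rtol=1e-4)
```

This also makes concrete why the $\sqrt{\det(M)}$ appears in the denominator of (6): each eigendirection contributes a factor $\sqrt{\pi/|\lambda_i|}$.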

with $\det(M)$ the determinant of the matrix $M(k)$, which depends on $k$ and is invariant to rotations of the data, since it has been written in a manifestly rotationally invariant way. It is common practice to evaluate integrals of this form by contour integration; for this we would have to promote $k$ to the complex plane and choose an appropriate path in the complex $k$-plane. However, since the expression for the determinant is not a perfect square, the square root in the denominator turns the points at which the denominator vanishes into branch points rather than poles, making the integration around them extremely difficult. Thus contour integration cannot help, and the integral over $k$ cannot be done analytically.

We were thus forced to Taylor expand the square root in the denominator of expression (6) in order to approximate the integral analytically. Nevertheless, this represents an important step towards generalising our work on planar shapes to three-dimensional curves. The integrations over the remaining nuisance parameters are challenging, although positive developments have been made towards a series solution; we leave the remaining calculation for future consideration as an extension of the analysis presented here. This work is still in progress but shows promising signs of improving upon current shape classification methods in three dimensions.

## 0.5 Example in two-dimensions

To test and verify the algorithm's efficacy in the classification of two-dimensional data shapes, examples from two shape databases were considered: the Kimia database and a simulated letter database. In the latter case the application of our algorithm comes with a warning: ordinarily the orientation of letters is crucial (for example W versus M, and C versus U), whereas our likelihood has been constructed to be invariant under rotations of the data. The tests on this database should therefore be understood as a general demonstration of our algorithm, and not as a serious proposal for the recognition of written letters.

Both databases comprised binary images, which were used for training and testing. The shapes' boundaries were extracted with MATLAB built-in functions, and simulated shapes played the role of the observed data sets. The proposed algorithm was tested on the simulated data sets and its classification results are very positive, as illustrated in Figure 1.
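For readers without MATLAB, a minimal sketch of boundary extraction from a binary image is shown below. This is not the authors' pipeline: it is a simple pure-NumPy stand-in that marks as boundary every foreground pixel with at least one background 4-neighbour; the function name and the toy image are hypothetical.

```python
import numpy as np

def boundary_points(mask):
    """Boundary pixels of a binary mask: foreground pixels with at
    least one 4-neighbour in the background (a simple stand-in for
    MATLAB's built-in boundary tracing)."""
    m = np.pad(mask.astype(bool), 1)
    center = m[1:-1, 1:-1]
    # A pixel is interior if it and all four neighbours are foreground.
    interior = (center & m[:-2, 1:-1] & m[2:, 1:-1]
                       & m[1:-1, :-2] & m[1:-1, 2:])
    edge = center & ~interior
    return np.argwhere(edge)

# Toy binary image: a filled 6x6 square inside a 10x10 frame.
img = np.zeros((10, 10), dtype=int)
img[2:8, 2:8] = 1
pts = boundary_points(img)
print(len(pts))  # 20 boundary pixels for a filled 6x6 square
```

The extracted pixel coordinates would still need to be ordered along the contour before being used as an ordered data shape.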

For the Kimia database we found that, over 10 runs of 10 shapes each, the average classification level and the average success rate were both high. From these experiments we concluded that the number of sampled points is crucial, since as soon as the number of points increases to more than 50 the confidence levels become almost 90 percent. For the alphabet database, the average classification level and the average success rate were similarly high. The evaluation of the performance of the algorithm in three dimensions could be tested using examples from 3D geological sand formations, as previously discussed in Tsiftsi et al. (2014).

Figure 1: Classification results for a Kimia shape and the letter E.
## References

Dryden, I.L. and Mardia, K. (1998). Statistical Shape Analysis. J. Wiley.

Kimia, B.B. (2015). Kimia database. Available at www.lems.brown.edu/ dmc/.

Srivastava, A. and Jermyn, I.H. (2009). Looking for shapes in 2D cluttered point clouds. IEEE Trans. Patt. Anal. Mach. Intell., 31(9), 1616–1629.

Tsiftsi, T., Jermyn, I. and Einbeck, J. (2014). Bayesian shape modelling of cross-sectional geological data. In: 29th International Workshop on Statistical Modelling, 14–18 July 2014, Goettingen, Germany; proceedings, Amsterdam: Statistical Modelling Society, 161–164. [arXiv:1802.09631 [stat.ME]].

Wood, A.T.A. (1993). Estimation of the concentration parameters of the Fisher matrix distribution on $SO(3)$ and the Bingham distribution on $S_q$, $q\geq 2$. Australian Journal of Statistics, 35(1), 69–79.