0.1 Introduction
Sand bodies, the sedimentary, subterranean remnants of ancient rivers, are important to both geology and the petroleum industry. In particular, their cross-sectional shapes help determine their oil-bearing capacity. Current classification schemes for sand body shapes are qualitative, simple, and ad hoc, so there is a need for quantitative analysis based on statistical models. There are several problems of interest: estimation of shape class parameters given labelled data shapes (a ‘data shape’ is an ordered set of points in the plane); classification of new data shapes; and unsupervised classification. Writing θ for the shape class parameters and D for the dataset, which consists of several data shapes together with their class labels, parameter estimation is described by the posterior probability p(θ | D). By Bayes’ theorem, this is given by:
(1) p(θ | D) ∝ p(D | θ) p(θ)
In this, as in all of the above problems, the major task is to calculate the likelihood p(D | θ). This is the problem addressed in this paper. The problem is not unique to the sand body application: it occurs in many applications of shape modelling, and is thus of broad interest.
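The grid-based reading of equation (1) can be sketched in a few lines. The Gaussian likelihood, the flat prior, and the data values below are illustrative stand-ins of our own, not the shape likelihood derived later in the paper.

```python
import math

def log_likelihood(theta, data):
    """Toy per-class likelihood: data modelled as N(theta, 1)."""
    return sum(-0.5 * (x - theta) ** 2 - 0.5 * math.log(2 * math.pi)
               for x in data)

def grid_posterior(data, grid, log_prior):
    """p(theta | D) on a parameter grid, normalized to sum to one,
    computed as likelihood times prior via log-sum-exp for stability."""
    logs = [log_likelihood(t, data) + log_prior(t) for t in grid]
    m = max(logs)
    ws = [math.exp(l - m) for l in logs]
    z = sum(ws)
    return [w / z for w in ws]

data = [1.8, 2.1, 2.4, 2.0]             # illustrative observations
grid = [i * 0.1 for i in range(41)]     # theta in [0, 4]
post = grid_posterior(data, grid, log_prior=lambda t: 0.0)  # flat prior
best = grid[max(range(len(grid)), key=lambda i: post[i])]   # MAP estimate
```

With a flat prior the grid maximum sits at the sample mean, as expected for a Gaussian likelihood.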
0.2 The likelihood
The calculation of the likelihood is complicated by the presence of many nuisance parameters that must be integrated out. Writing D for the dataset, θ for the shape class parameters, m for the underlying shape modulo similarity transformations, g for a similarity transformation, b for a bijection matching sand body shape points to data shape points, and σ² for the noise variance, the partitioned likelihood is:
(2) p(D | θ) = Σ_b ∫ dm dg dσ² p(D | g·m, b, σ²) p(b) p(g) p(σ²) p(m | θ)
where we have made a number of simplifying independence assumptions. Here p(D | g·m, b, σ²) models errors in shape point collection as Gaussian white noise. In the above expression, m is the underlying sand body shape modulo similarity transformations, which comes from a class with parameters θ, while g is a similarity transformation generating the full sand body shape g·m. Data formation is modelled as a sampling of points around the sand body shape, and a bijection b relating each point of the sand body shape to a unique point of the data shape, plus the above Gaussian noise with variance σ².
In previous work, e.g. Dryden and Mardia (1998) or Srivastava and Jermyn (2009), an algorithmic approach was taken to the integrals over the similarity group, using the Procrustes algorithm to compute a zeroth-order Laplace approximation. Here we carry out the group integrations, and the integration over σ², analytically, resulting in a closed-form expression. This is the main contribution of this paper.
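The Procrustes step referred to above can be illustrated concretely: ordinary Procrustes alignment of two 2-D point sets with known correspondences, written with complex numbers. This is the textbook construction, not code from the cited works.

```python
def procrustes_align(src, dst):
    """Return src after the similarity transform (translation, rotation,
    scale) that best aligns it to dst in the least-squares sense.
    Points are complex numbers x + iy; correspondences are by index."""
    n = len(src)
    mu_s = sum(src) / n                 # centroids fix the translation
    mu_d = sum(dst) / n
    s = [z - mu_s for z in src]         # centre both sets
    d = [z - mu_d for z in dst]
    # optimal rotation-plus-scale is a single complex multiplier
    a = sum(dz * sz.conjugate() for sz, dz in zip(s, d))
    a /= sum(abs(sz) ** 2 for sz in s)
    return [a * sz + mu_d for sz in s]

# usage: recover a unit square rotated by 90 degrees and shifted
square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
moved = [1j * z + (3 + 2j) for z in square]   # rotate, then translate
aligned = procrustes_align(square, moved)
```

Because the target here really is a similarity transform of the source, the alignment is exact up to floating-point error.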
First, we have to choose the priors for the similarity transformations. The Jeffreys joint prior for these was calculated, but it leads to a non-normalizable likelihood. Instead, a regularized version was employed: a Gaussian prior for translations; a uniform prior for rotation angle; and a Rayleigh prior for scaling. These choices break translation invariance by effectively limiting the size of the two-dimensional domain in which the shape points lie, and break scale invariance by effectively limiting the range of scales considered. With these priors, the result of the integrations over translations, rotations, and scalings is:
(3)
where the quantities appearing are regularized versions of the number of points and of the variance, involving appropriate regulators and a normalization constant.
A prior was used for the noise variance. After integration, this leads to:
(4) 
This expression is the main result of the paper. It applies to any shape modelling application in which white Gaussian noise is added to a discrete set of shape points.
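The white Gaussian noise model underlying this result is easy to write down explicitly. The sketch below evaluates the corresponding log-likelihood for a toy outline; the point coordinates and noise levels are illustrative choices of ours.

```python
import math

def log_noise_likelihood(data, shape, sigma2):
    """Log-likelihood of 2-D data points given matched shape points,
    under isotropic Gaussian noise of variance sigma2 per coordinate."""
    ll = 0.0
    for (dx, dy), (sx, sy) in zip(data, shape):
        r2 = (dx - sx) ** 2 + (dy - sy) ** 2
        ll += -r2 / (2 * sigma2) - math.log(2 * math.pi * sigma2)
    return ll

# a toy rectangular outline and data points perturbed by small noise
shape = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.5), (0.0, 0.5)]
data = [(0.02, -0.03), (0.99, 0.04), (1.05, 0.52), (-0.01, 0.47)]

# the likelihood should prefer a noise variance near the true scale
ll_true = log_noise_likelihood(data, shape, 0.05 ** 2)
ll_big = log_noise_likelihood(data, shape, 1.0)
```

Evaluating at the roughly correct noise scale gives a much higher log-likelihood than a badly mis-specified one, which is what makes the variance integrable against a prior in the first place.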
To finish the calculation of the likelihood, we have to perform the remaining integrations, and the summation over bijections, in equation (2). This we do using Monte Carlo techniques. We use a uniform distribution for the samplings, while the shape, which is described by positive quantities such as aspect ratios and lengths, is modelled with Gamma distributions, whose parameters over all classes constitute the class parameters. In previous work, when the group integrations were approximated using the Laplace approximation and the Procrustes algorithm, the sum over bijections could be approximated, again in a zeroth-order Laplace estimation, using the Hungarian algorithm, since only a linear assignment problem was involved. In the integrated likelihood derived here, the extra terms complicate the situation, turning the linear assignment problem into a quadratic assignment problem, which is NP-hard. Instead of using the Laplace approximation, we approximate the full summation using Monte Carlo, with a uniform prior on the bijections.
0.3 Parameter estimation
The above result can be used to estimate the class parameters given data shapes from each class. Figure 1 shows an example of a likelihood surface, in this case computed for the simplest case of a rectangle modulo similarities. This is a one-dimensional shape space that can be parametrized by the aspect ratio. There is thus one Gamma distribution involved, and the class parameter vector is two-dimensional. The surface is rough, since only a coarse grid in parameter space was used to reduce simulation time.
Maximum likelihood estimation was carried out using a built-in Matlab optimization function. The algorithm correctly navigated towards the maximum of the likelihood surface, but convergence was slow. We hope to improve this in the future by using a different maximization algorithm.
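As a sketch of what a replacement optimizer might look like, the snippet below maximizes a one-parameter log-likelihood by golden-section search, a simple derivative-free method. The Gamma log-likelihood in its scale parameter (with the shape parameter held fixed) and the data values are illustrative assumptions of ours, not the paper's model.

```python
import math

def golden_max(f, lo, hi, tol=1e-8):
    """Golden-section search for the maximum of a unimodal f on [lo, hi]."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c                 # maximum lies in [a, d]
            c = b - g * (b - a)
        else:
            a, c = c, d                 # maximum lies in [c, b]
            d = a + g * (b - a)
    return (a + b) / 2

# Gamma(k, scale) log-likelihood in the scale parameter, k held fixed
data = [1.1, 0.7, 2.3, 1.6, 0.9]        # e.g. observed aspect ratios
k = 2.0

def loglik(scale):
    return sum((k - 1) * math.log(x) - x / scale for x in data) \
           - len(data) * (k * math.log(scale) + math.lgamma(k))

scale_hat = golden_max(loglik, 1e-3, 10.0)
# with k fixed, the analytic MLE for the scale is mean(data) / k
```

The search converges linearly but reliably on this unimodal objective, and the result can be checked against the closed-form maximizer.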
0.4 Conclusion
The main contribution of this paper is the analytical evaluation of the integrals over the similarity group and the noise variance in a model for shape data. The application considered here is to the classification of the cross-sectional shapes of sand bodies, but the same techniques apply to any shape model involving Gaussian noise. This work is still in progress but shows promising signs of improving the current classification and estimation methods employed by geologists. There are technical obstacles in the form of numerical integrations which we hope to overcome in the near future.
Dryden, I.L. and Mardia, K.V. (1998). Statistical Shape Analysis. J. Wiley.
Srivastava, A. and Jermyn, I.H. (2009). Looking for shapes in two-dimensional, cluttered point clouds. IEEE Trans. Patt. Anal. Mach. Intell., 31(9), 1616–1629.
Notes
In the manuscript version published in the conference proceedings volume, there was a small error in expression (4). This is now corrected.