Image segmentation denotes the task of partitioning an image into its constituent parts. Feature-based segmentation looks at distinctive characteristics (features) in the image, grouping similar pixels into clusters which are meaningful for the application at hand. Typical examples of features are based on greyscale/RGB intensity and texture. Mathematical methods for image segmentation are mainly formalised in terms of variational problems in which the segmented image is a minimiser of an energy. The most common image feature encoded in such energies is the magnitude of the image gradient, detecting regions (or contours) where sharp variations of the intensity values occur. Examples include the Mumford-Shah segmentation approach, the snakes and geodesic active contour models [39, 17]. Moreover,
Chan and Vese proposed an instance of the Mumford-Shah model for piecewise constant images whose energy is based on the mean grey values of the image inside and outside of the segmented region rather than on the image gradient, and hence does not require strong edges for segmentation. The Chan-Vese model has also been extended to vector-valued images such as RGB images. Other image segmentation methods have been considered in [40, 27]. They rely on the total variation (TV) seminorm, which is commonly used for image processing tasks due to its properties of simultaneous edge preservation and smoothing.
The non-smoothness of most segmentation energies typically renders their numerical minimisation difficult. In the case of the Mumford-Shah segmentation model, the numerical realisation is additionally complicated by the dependency of the energy on both the image function and the object contour. To overcome this, several regularisation methods and approximations have been proposed in the literature, e.g. [4, 11, 12, 68] for Mumford-Shah segmentation. In the context of TV-based segmentation models, the Ginzburg-Landau functional plays an important role. Originally considered for the modelling of physical phenomena such as phase transition and phase separation, it is used in imaging for approximating the TV energy. Some examples of the use of this functional in the context of image processing are [27, 25, 26], which relate to previous works by Ambrosio and Tortorelli on diffuse interface approximation models [5, 4].
Such variational methods for image segmentation have been extensively studied from an analytical point of view, and the resulting segmentation is usually robust and computationally efficient. However, variational image segmentation as described above still faces many problems in the presence of low contrast and in the absence of clear boundaries separating regions. The main drawback of these methods is that they are limited to image features which can be mathematically formalised (e.g. in terms of an image gradient) and encoded within a segmentation energy. In recent years, dictionary-based methods have become increasingly popular in the image processing community, complementing the more classical variational segmentation methods. By learning the distinctive features of the region to be segmented from examples provided by the user, these methods are able to segment the desired regions in the image correctly.
In this work, we consider the method proposed in [10, 29, 44, 43] for image segmentation and labelling. This approach goes beyond the standard variational approach in two respects. Firstly, the model is set up in the purely discrete framework of graphs. This is rather unusual for variational models, where one normally considers functionals and function spaces defined on subdomains of Euclidean space in order to exploit properties and tools from convex and functional analysis and the calculus of variations. Secondly, the new framework allows for more flexibility in terms of the features considered. Additional features like texture, light intensity or others can be considered as well, without encoding them in the function space or the regularity of the functions. Due to the possibly very large size of the image (nowadays of the order of megapixels for professional cameras) and the large number of features considered, the construction of the problem may be computationally expensive and often requires reduction techniques [54, 53, 28]. In several papers (see, e.g., [61, 63, 34]) the segmentation problem was rephrased in the graph framework by means of the graph cut objective function. Follow-up works on the use of graph-based approaches are, for instance, [45, 46], where an iterative application of heat diffusion and thresholding, also known as the Merriman-Bence-Osher (MBO) method, is discussed for binary image labelling, and where the Mumford-Shah model is reinterpreted in a graph setting.
In this paper, we also address the problem of detecting objects whose geometrical properties are a priori known. An example is the detection of lines and circles. These objects can be identified by mapping them onto an auxiliary space where relevant geometrical properties (such as linear alignment and roundness) are represented as peaks of specific auxiliary functions. In this work, we use the Hough transform to detect measurement tools (rulers, concentric circles of fixed radii) with the intent of providing quantitative, scale-independent measurements of the region segmented by one of the techniques described above. In this way, an absolute measurement of the region of interest in the image is possible, independent of the scale of the image, which could depend, for instance, on the distance of the object to the camera.
We demonstrate the use of our method in the context of real-world applications in which segmentation and subsequent object measurement are crucial. Our main application is the measurement of the white forehead patch (blaze) of male pied flycatchers, which has been studied with regard to sexual selection, see Figure 1. The forehead patch is known to vary between individuals and can be subject to both intra- and intersexual selection, with pied flycatchers from Spain preferring males with large patches. Forehead patch size has been shown to signal male phenotypic quality through plasma oxidative stress and antioxidant capacity. However, in all studies to date the measurements of patches have been inconsistent and generally inaccurate. For example, some studies have simply measured patch height, whereas Potti and Montalvo assumed the shape to be a trapezium whose area is computed from the white patch width, the bill width and the height of the white patch. Morales et al. measured the length and breadth of the forehead patch with callipers and calculated its size as the area of a rectangle. Other studies have measured the patches from photographs, e.g., Järvistö et al., Ruuskanen et al. and Sirkiä et al., who photographed the forehead with a scale bar included in each picture and measured the patch as the white area using the IMAGEJ software
. However, none of these three papers provide details of how the measurement was actually achieved, e.g., whether patches were delineated or roughly estimated with a simple shape. Most recently, Moreno et al. analysed digital photos of forehead patches with Adobe PhotoShop CS version 11.0, relating a known distance on the ruler to a number of pixels and using this to estimate length. Zooming to 400% and using the paintbrush tool with 100% hardness and 25% spacing, the authors delineated the patch and measured the area of the white regions on the forehead. While this is the best measurement method to date, it is still subject to human measurement error and subjective assessment of patch boundaries. We report some segmentation results obtained by manual selection and polygon fitting in Figure 2. In this manuscript we use a mathematically robust approach to segment the blaze independently and thereby provide an accurate measurement of forehead patch area.
A similar challenge is encountered in medical applications monitoring and quantifying the evolution of skin moles for the early diagnosis of melanoma (skin cancer). A measurement of the mole, normally user-dependent, is performed using a ruler located next to it. A picture is then taken and used for future comparisons and follow-up, see Figure 3, and compare [18, 1] for previous attempts at automatic detection of melanomas. For such an application, a systematic quantitative analysis is also required.
Mole images from:
©Chassenet/Science Photo Library,
http://en.wikipedia.org/wiki/Melanoma (public domain),
http://www.diomedia.com/stock-photo-close-up-of-a-papillomatous-dermal-nevus-mole-a-raised-pigmented-skin-lesion-that-results-from-a-proliferation-of-benign-melanocytes-c-cid-image14515019.html, ©Phototake RM/ ISM.
In several other applications the task of measuring objects directly from an image is encountered. These include zoological and behavioural studies in the animal world, where detecting the size, shape and possible symmetries of specific distinctive animal features can be useful, as well as, for instance, archaeological digs, where the measurement of finds is important for comparisons and classification.
Outline of the method.
We consider the image as a graph whose vertices are the image pixels. Similarity between pixels in terms of colour or texture features is modelled by a weight function defined on pairs of vertices. Our method runs as follows. Firstly, using examples provided by the user (dictionaries) as well as matrix completion and operator splitting techniques, the segmentation of the region of interest is performed. In the graph framework, this corresponds to clustering together pixels having similar features. This is obtained by minimising on the graph the Ginzburg-Landau functional typically used in the continuum setting to describe diffuse interface problems. In order to provide quantitative measurements of the segmented region, a second detection step is then performed. The detection here aims to identify the distinctive geometrical features of the measurement tool (such as linear alignment for rulers or circularity for circles) in order to read off the scale of the measurement tool considered. The segmented region of interest can then be measured by simple comparisons, and quantitative measurements such as perimeter and area can be provided.
We propose a self-contained programme combining automated detection and subsequent size measurement of objects in images where a measurement tool is present. Our approach is based on two powerful image analysis techniques from the literature: a graph segmentation approach which uses a discretised Ginzburg-Landau energy for the detection of the object of interest, and the Hough transform for detecting the scale of the measurement tool. While these methods are state of the art, their combination for measuring object size in images proposed in this paper is new. Moreover, to our knowledge there are only few contributions in the literature that broach the issue of how the graph segmentation approach and the Hough transform are applied to specific problems [29, 44, 33]. Indeed, here we present these methodologies in detail, discussing in particular important aspects of their practical implementation, and demonstrate the robust applicability of our programme for measuring the size of objects, showcasing its performance on several examples arising in zoology, medicine and archaeology. Namely, we first apply our combined model to the measurement of the blaze on the forehead of male pied flycatchers, for which we run a statistical analysis on the accuracy and predicted error of the measurement on a database of thirty images. State-of-the-art methods for this task typically require the user to fit polygons inside or outside the blaze or to segment the blaze by hand. Similarly, the scale is typically read from the image by manually measuring it on the ruler. With respect to medical applications, we apply our combined method to the segmentation and measurement of melanomas.
Although efficient segmentation methods for automated melanoma detection already exist in the literature (see, e.g., [18, 1]), to the best of the authors' knowledge no previous methods exist that also measure the melanoma by detecting the scale on the ruler placed next to it (see Figure 3). Conversely, in the case of archaeological applications, some models for the automatic detection of the measurement tool in the image exist, but no automatic methods are proposed for the segmentation of the region of interest. A free release of the MATLAB code used to compute the results will be made available after the zoological analysis of the pied flycatcher data based on our segmentation and measurement has been completed.
Organisation of the paper.
In Section 2 we present the mathematical ingredients used for the design of the graph-based segmentation technique used in [10, 29, 44, 43]. They come from two different worlds: the framework of diffusion PDEs used for modelling phase transition/separation problems (see Section 2.1), and graph theory and clustering, see Section 2.2. With a view towards a detailed numerical explanation, we also recall a splitting technique and a popular matrix completion technique used in our problem to reduce the computational costs. In Section 3 we explain how the geometrical Hough transform is used to detect the scale in an image. Finally, Section 4 contains the numerical results obtained with our combined method applied to the problems described above. For completeness, we give some details on the Nyström matrix completion technique in Appendix A and a review of the Hough transform for line and circle detection in Appendix C.
2 Image segmentation as graph clustering
We present in this section the mathematical background for the design of the Ginzburg-Landau (GL) graph segmentation algorithm, in which the image segmentation problem is rephrased as a minimisation problem on a graph defined by features computed from the image. Compared to the methods above, the graph framework allows for more freedom in terms of the possible features used to describe the image, such as texture.
2.1 The Ginzburg-Landau functional as approximation of TV
In the following, we recall the main properties of the original continuum version of the GL functional, explaining its importance in the context of image segmentation problems, as well as the main concepts of graph theory which will be used for the segmentation modelling.
Several physical problems modelling phase transition and phase separation phenomena are built around the well-known GL functional
\[
GL_\varepsilon(u) = \frac{\varepsilon}{2}\int_\Omega |\nabla u|^2\,dx + \frac{1}{\varepsilon}\int_\Omega W(u)\,dx.
\tag{2.1}
\]
The functional above is defined in the continuous setting: $\Omega$ represents an open subset of $\mathbb{R}^2$, $u:\Omega\to\mathbb{R}$ is the density of a two-phase material and $W$ is a double-well potential, e.g. $W(u)=u^2(u-1)^2$. The two wells of $W$ correspond to the two phases of the material. The parameter $\varepsilon>0$ is the spatial scale. Variational models built around this functional are also referred to as diffuse interface models because of the interface appearing between the two regions containing the phases (i.e. the two wells of $W$) due to the competition between the two terms of the functional (2.1). Nonetheless, some smoothness preventing jumps between the two regions is ensured by the first, regularising term.
The use of the GL functional has become very popular in image processing due to its connections with the total variation (TV) seminorm. In [48, 49], for instance, $\Gamma$-convergence properties of (2.1) to the TV functional are shown. Thus, the GL functional is very often used as a quadratic approximation of total variation. Fast numerical schemes relying on these connections have been designed for many imaging problems, thus overcoming the issues related to nonsmooth TV minimisation [5, 27, 19]. In image processing, the functional considered is often of the form
\[
E(u) = GL_\varepsilon(u) + \lambda\,\phi(u,f),
\tag{2.2}
\]
where $\phi(u,f)$ is a fidelity term measuring the distance of the reconstructed image $u$ to the given image $f$. Depending on the application, different data fidelities are employed; typically, they are related to statistical and physical assumptions of the model considered. A standard example is the quadratic fidelity $\phi(u,f)=\frac{1}{2}\|u-f\|_2^2$. The parameter $\lambda>0$ determines the influence of the data fit compared to the regularisation. Taking the gradient descent of (2.2), we get the following evolutionary PDE, known in the literature as the Allen-Cahn equation with an additional forcing term due to the fidelity $\phi$:
\[
u_t = \varepsilon\,\Delta u - \frac{1}{\varepsilon}\,W'(u) - \lambda\,\nabla_u \phi(u,f).
\tag{2.3}
\]
Steady states of equation (2.3) are the stationary points of the energy in (2.2). Note that (2.2) is not convex, so uniqueness is not guaranteed and, consequently, the long-time behaviour of solutions of (2.3) will depend on the initial condition. The linear diffusion term weighted by $\varepsilon$ appearing in (2.3) allows for fast solvers using, for instance, the Fast Fourier Transform (FFT), which translates the Laplace operator into a multiplication operator on the Fourier modes.
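To illustrate this, the following minimal sketch (in Python, with illustrative parameter values; a hypothetical helper, not the implementation used for the results in this paper) evolves (2.3) with a quadratic fidelity by a semi-implicit scheme: the stiff diffusion term is treated implicitly and inverted in Fourier space, while the double-well and fidelity terms are treated explicitly.

```python
import numpy as np

def allen_cahn_fft(f, eps=0.5, lam=10.0, dt=0.01, steps=50):
    """Semi-implicit Allen-Cahn evolution with fidelity, solved via FFT.

    Illustrative sketch: W(u) = u^2 (1-u)^2 is the double-well potential,
    so W'(u) = 2 u (1-u) (1-2u); all parameter values are assumptions.
    """
    n, m = f.shape
    # Fourier symbol of the (periodic) Laplacian: multiplication by -|k|^2
    kx = 2 * np.pi * np.fft.fftfreq(n)
    ky = 2 * np.pi * np.fft.fftfreq(m)
    lap = -(kx[:, None] ** 2 + ky[None, :] ** 2)
    u = f.copy()
    for _ in range(steps):
        Wp = 2 * u * (1 - u) * (1 - 2 * u)          # W'(u), explicit
        rhs = np.fft.fft2(u - dt * (Wp / eps + lam * (u - f)))
        u = np.real(np.fft.ifft2(rhs / (1 - dt * eps * lap)))  # implicit diffusion
    return u
```

Note that the implicit treatment of the diffusion makes each step unconditionally stable with respect to the Laplacian, since the Fourier-space denominator is always at least one.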
2.2 Towards the modelling: the graph framework
The GL approach described above can be rephrased as a method for high-dimensional data classification on graphs, which has been applied to several imaging problems [29, 44], showing good performance and robustness. We consider the problem of binary image segmentation, where we want to partition a given image into two components, where each component is a set of pixels (also called a cluster, or a class) and represents a certain object or group of objects. Typically, some a priori information describing the object(s) we want to extract is given and serves as initial input for the segmentation algorithm. For image labelling, two images are taken as input: the first one has been manually segmented into two classes, and the objective is to automatically segment the second image using the information provided by the segmentation of the first one.
We review in the following the main ingredients of the model considered, starting with a quick review of concepts from graph theory. We represent a rectangular image by the set of its pixels. For each pixel, we define its image neighbourhood as the set of pixels contained in a square of fixed size centred at that pixel. We associate to every pixel a feature vector encoding selected characteristics of its neighbourhood. These characteristics are related to the grey or RGB (red, green, blue) intensity values as well as the texture features of the neighbourhood. In Section 2.5, we will explain our feature vector construction in more detail. The map assigning to each pixel its feature vector is called the feature function. For constructing the feature vectors in Section 2.5, it will be useful to associate a neighbourhood vector to each neighbourhood, such that the ordering of the pixels within each neighbourhood is consistent between pixels, e.g., ordering the pixels of each square from left to right and top to bottom. The specific choice of ordering is not important, as long as it is consistent for each pixel neighbourhood.
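As an illustration of this neighbourhood-based construction, the following sketch (a hypothetical helper, not the authors' code) builds, for each pixel of a grey scale image, the vector of intensity values in its neighbourhood, ordered left to right and top to bottom; the image is mirrored at the border so that boundary pixels also have full neighbourhoods.

```python
import numpy as np

def patch_features(img, r=1):
    """Feature vector per pixel: grey values of the (2r+1)x(2r+1)
    neighbourhood, read left-to-right, top-to-bottom (a fixed, consistent
    ordering). The border is mirrored so every pixel has a full
    neighbourhood. Returns an (n_pixels, (2r+1)**2) array.
    """
    padded = np.pad(img, r, mode='reflect')   # mirror the boundary
    n, m = img.shape
    k = 2 * r + 1
    feats = np.empty((n * m, k * k))
    for i in range(n):
        for j in range(m):
            feats[i * m + j] = padded[i:i + k, j:j + k].ravel()
    return feats
```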
Next we construct a simple weighted undirected graph whose vertices correspond to the pixels of the image and with edges whose weights depend on the feature function $x$. Let $V$ be the vertex set. To emphasise that each vertex in $V$ corresponds to exactly one pixel, we label the vertex corresponding to a pixel by that pixel as well. Let $w$ be a symmetric and nonnegative function, i.e. $w(i,j)=w(j,i)\ge 0$ for each pair of vertices $i,j$. We define the edge set as the collection of all undirected edges connecting nodes $i$ and $j$ for which $w(i,j)>0$. The function $w$ restricted to the edge set is then a positive edge weight function.

In our applications we define
\[
w(i,j) = \hat w\big(x(i), x(j)\big),
\tag{2.4}
\]
where $\hat w$ is a given function and $x$ is the feature function. In operator form, the weight matrix is the nonnegative symmetric matrix $W$ whose elements are $w_{ij}=w(i,j)$. In the following, we will not distinguish between the two functions $w$ and $\hat w$ and, with a slight abuse of notation, we will write $w$ for both.
Weight functions express the similarities between vertices and will be used in the following to partition the vertex set into clusters such that the sum of the edge weights between the clusters is small. There are many different mathematical approaches to attempt this partitioning. When formulated as a balanced cut minimisation, the problem is NP-complete, which has inspired relaxations that are more amenable to computational approaches, many of which are closely related to spectral graph theory. We refer the reader to the dedicated monographs on the topic. The method we use in this paper can be understood (at least in spirit, if not technically, [65, 66]) as a nonlinear extension of the linear relaxed problems.
To solve the segmentation problem, we minimise a discrete GL functional (formulated in the graph setting instead of the continuum setting) via a gradient descent method similar to the one described in Section 2.1. In particular, in this setting the Laplacian in (2.3) will be a (negative) normalised graph Laplacian. We will use the spectral decomposition of the solution with respect to the eigenfunctions of this Laplacian. In Section 2.4 we discuss the Nyström method, which allows us to quickly compute this decomposition, but first we introduce the graph Laplacian and the graph GL functional.
The discrete operators.
We start from the definition of the differential operators in the graph framework.
For each vertex $i$, we define the degree of $i$ as
\[
d_i = \sum_{j} w_{ij}.
\]
In operator form, the diagonal degree matrix $D$ is defined to have diagonal elements $d_i$.
A subset of the vertex set is connected if any two of its vertices can be connected by a path (i.e. a sequence of vertices such that subsequent vertices are connected by an edge) all of whose vertices lie in the subset. A finite family of pairwise disjoint subsets whose union is the whole vertex set is called a partition of the graph.
We now have all the ingredients to define the graph Laplacian. Denoting by $\mathcal{V}$ the space of all real-valued functions on the vertex set, the graph Laplacian is the operator $L:\mathcal{V}\to\mathcal{V}$ such that
\[
(Lu)(i) = \sum_{j} w_{ij}\,\big(u(i)-u(j)\big).
\tag{2.5}
\]
We are considering a finite graph, so real-valued vertex functions can be identified with vectors. We can then write the graph Laplacian in matrix form as $L = D - W$, or element-wise (for a simple graph, i.e. $w_{ii}=0$) as
\[
L_{ij} = \begin{cases} d_i & \text{if } i=j,\\ -w_{ij} & \text{otherwise.} \end{cases}
\tag{2.6}
\]
It is worth mentioning (see Remark 2.2 below) that this graph Laplacian is a positive semidefinite operator. Note that, by convention, the sign of the discrete Laplacian is opposite to that of the (negative semidefinite) continuum Laplacian. The associated quadratic form of $L$ is
\[
\langle u, Lu\rangle = \frac{1}{2}\sum_{i,j} w_{ij}\,\big(u(i)-u(j)\big)^2.
\tag{2.7}
\]
The quadratic form can be interpreted as the energy whose optimality condition corresponds to the vanishing of the graph Laplacian in (2.6).
The operator in (2.5)-(2.6) is not the only graph Laplacian appearing in the literature. To set it apart from others, it is also referred to as the unnormalised or combinatorial graph Laplacian. This operator can be related to the standard continuous differential operator through nonlocal calculus. More precisely, the eigenvectors of $L$ converge to the eigenvectors of the standard Laplacian, but in the large sample size limit a proper scaling of $L$ is needed in order to guarantee stability of convergence to the continuum operator [10, 44]. Hence, we consider in the following the normalisation of $L$ given by the symmetric graph Laplacian
\[
L_s = D^{-1/2} L\, D^{-1/2} = I - D^{-1/2} W D^{-1/2}.
\tag{2.8}
\]
In [61, Section 5] a quick review of the connections between the symmetric graph Laplacian (2.8) and spectral graph theory is given. Computing the eigenvalues of the normalised symmetric Laplacian corresponds to the computation of the generalised eigenvalues used to compute normalised graph cuts, in a way that the standard graph Laplacian may fail to do. Typically, spectral clustering algorithms for binary segmentation base the partition of a connected graph on the eigenvector corresponding to the second smallest eigenvalue of the normalised Laplacian, using, for example, $k$-means. For further details and a comparison with other methods, we refer the reader to the literature and to [10, Section 2.3], where a detailed explanation of the importance of the normalisation of the Laplacian is given.
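The objects just introduced can be made concrete on a toy graph. The sketch below (purely illustrative, with an arbitrary small weight matrix) builds the combinatorial and symmetric normalised Laplacians, evaluates the quadratic form in two ways, and partitions two weakly connected triangles using the eigenvector of the second smallest eigenvalue.

```python
import numpy as np

# Toy graph: two triangles (vertices 0-2 and 3-5) joined by one weak edge.
W = np.zeros((6, 6))
edges = [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 1.0),
         (3, 4, 1.0), (3, 5, 1.0), (4, 5, 1.0),
         (2, 3, 0.05)]                       # weak bridge between the clusters
for i, j, wt in edges:
    W[i, j] = W[j, i] = wt

d = W.sum(axis=1)                            # vertex degrees
L = np.diag(d) - W                           # combinatorial Laplacian L = D - W
Ls = np.eye(6) - W / np.sqrt(np.outer(d, d))  # I - D^{-1/2} W D^{-1/2}

# The quadratic form <u, L u> equals (1/2) sum_ij w_ij (u_i - u_j)^2.
u = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5])
quad = u @ L @ u
quad_sum = 0.5 * sum(W[i, j] * (u[i] - u[j]) ** 2
                     for i in range(6) for j in range(6))

# Binary partition from the eigenvector of the second smallest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(Ls)
labels = eigvecs[:, 1] > 0
```

The sign pattern of the second eigenvector separates the two triangles, which is exactly the behaviour the weak bridge is designed to expose.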
The discrete GL functional.
In the graph setting, the GL functional with data fidelity takes the form
\[
GL(u) = \frac{\varepsilon}{2}\,\langle u, L_s u\rangle + \frac{1}{\varepsilon}\sum_{i} W\big(u(i)\big) + \frac{\mu}{2}\sum_{i} \chi(i)\,\big(u(i)-f(i)\big)^2.
\]
Here $f$ represents known training data provided by the user and, as before, $W$ is the double-well potential. The function $\chi$ is the characteristic function of the subset of labelled vertices, i.e. $\chi=1$ on the labelled set and $\chi=0$ on its complement. Hence, the corresponding fidelity term enforces the fitting between $u$ and $f$ on the known labels, while the labelling of the remaining pixels is driven by the first two, regularising, terms.
The corresponding gradient flow reads
\[
u_t = -\varepsilon\,L_s u - \frac{1}{\varepsilon}\,W'(u) - \mu\,\chi\,(u-f).
\]
The idea is to design a semi-supervised learning (SSL) approach where a priori information (i.e. cluster labels) on the labelled subset of vertices is used to label the points in its complement. The comparison uses the weight function defined in (2.4) to build the graph by comparing the feature vectors at each point.
Remark 2.3 (The weight function).
As pointed out in [10, Section 2.5], the main criteria driving the choice of the weight function are the desired outcome and the computational effort required to diagonalise the corresponding weight matrix $W$. A common choice is the Gaussian weight function, which, for feature vectors $x(i)$ and $x(j)$, reads
\[
w(i,j) = \exp\!\big(-\|x(i)-x(j)\|^2/\sigma^2\big).
\tag{2.9}
\]
Note that this function is symmetric: $w(i,j)=w(j,i)$.
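As a small sketch (the exact $\sigma$-scaling convention inside the exponential is an assumption), a Gaussian weight matrix of this kind can be assembled from the feature vectors in one vectorised step:

```python
import numpy as np

def gaussian_weights(feats, sigma=1.0):
    """W_ij = exp(-||x_i - x_j||^2 / sigma^2) from an (n, d) feature array.

    Illustrative helper; the sigma^2 normalisation is an assumed convention.
    """
    sq = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.exp(-sq / sigma ** 2)
```

By construction the result is symmetric with unit diagonal, matching the requirements on the weight function above.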
Several approaches to SSL using graph theory have been considered in the literature, compare [22, 31]. The approach presented here adapts fast algorithms available for the efficient minimisation of the continuous GL functional to the minimisation of the discrete one. In particular, to overcome the high computational costs, we present in the following an operator splitting scheme and a matrix completion technique applied to our problem.
2.3 Convex splitting
Splitting methods are widely used in the numerical study of PDEs. Here, we focus on convex splitting, which is used to numerically solve problems with a general gradient flow structure. Decomposing the energy as
\[
E = E_1 - E_2,
\]
where both $E_1$ and $E_2$ are convex, a semi-implicit discretisation for the steepest descent of $E$ reads
\[
\frac{u^{n+1}-u^{n}}{\Delta t} = -\nabla E_1(u^{n+1}) + \nabla E_2(u^{n}),
\tag{2.10}
\]
where $\nabla$ indicates formally the Fréchet derivative. The advantage of the convex splitting consists in treating the convex part implicitly in time and the concave part explicitly. Typically, the nonlinearities are placed in the explicit part of the splitting and their instability is balanced by the effect of the implicit terms.
For the discrete GL functional we use the splitting
\[
E_1(u) = \frac{\varepsilon}{2}\,\langle u, L_s u\rangle + \frac{c}{2}\,\|u\|^2,
\tag{2.11a}
\]
\[
E_2(u) = \frac{c}{2}\,\|u\|^2 - \frac{1}{\varepsilon}\sum_i W\big(u(i)\big) - \frac{\mu}{2}\sum_i \chi(i)\,\big(u(i)-f(i)\big)^2,
\tag{2.11b}
\]
where the constant $c$ has to be chosen large enough such that $E_2$ is convex for $u$ around the wells of $W$. The differential operator contained in the implicit component of the splitting, $E_1$, is the symmetric graph Laplacian, which can be diagonalised quickly and inverted using Fourier transform methods. In [10, Section 3.1], more details of the splitting are presented. Writing out in detail the time-discretised scheme (2.10), we get, for every node $i$,
\[
\frac{u^{n+1}(i)-u^{n}(i)}{\Delta t} = -\varepsilon\,(L_s u^{n+1})(i) - c\,u^{n+1}(i) + c\,u^{n}(i) - \frac{1}{\varepsilon}\,W'\big(u^{n}(i)\big) - \mu\,\chi(i)\,\big(u^{n}(i)-f(i)\big).
\tag{2.12}
\]
Here, $f$ denotes the training data, i.e. the known labels assigned by the user to nodes in the labelled subset. In our numerical experiments we initialised the time-stepping (2.12) using the training data.
Towards the numerical realisation.
The numerical strategy we intend to use is based on the following steps (see Section 2.5 for more details):
At each time step $n$, consider at every point the spectral decomposition of $u^{n}$ with respect to the eigenvectors $\phi_k$ of the operator $L_s$ as
\[
u^{n} = \sum_k a_k^{n}\,\phi_k,
\tag{2.14}
\]
with coefficients $a_k^{n} = \langle u^{n}, \phi_k\rangle$. Similarly, use the spectral decomposition in the basis $\{\phi_k\}$ for the other nonlinear quantities appearing in (2.12).
The only possible bottleneck of this strategy is the computation of the eigenvectors of the operator $L_s$, which, in practice, can be computationally costly for large and non-sparse weight matrices. To mitigate this potential problem, we use the Nyström extension (Section 2.4).
2.4 Matrix completion via Nyström extension
Following the detailed discussion in [10, Section 3.2], we present here the Nyström technique for matrix completion, used in previous works by the graph theory community [28, 7] and applied later to several imaging problems [53, 45, 46]. In our problem, the Nyström extension is used to find an approximation of the eigenvectors of the operator $L_s$. We will freely switch between the representation of eigenvectors (or eigenfunctions) as real-valued functions on the vertex set and as vectors.
Consider a fully connected graph with vertex set $V$ and the set of corresponding feature vectors. A vector $v$ is an eigenvector of the operator $L_s$ in (2.8) with eigenvalue $\lambda$ if and only if $v$ is an eigenvector of the operator $D^{-1/2} W D^{-1/2}$ with eigenvalue $1-\lambda$, since
\[
L_s v = v - D^{-1/2} W D^{-1/2} v.
\tag{2.15}
\]
Thus, finding the spectral decomposition of $L_s$ boils down to diagonalising the operator $D^{-1/2} W D^{-1/2}$. This is not obviously easier, as the matrix $W$, despite being nonnegative and symmetric, may be large and non-sparse, so the computation of its spectrum may be computationally hard. Here, however, we take advantage of the Nyström extension. Given the eigenvalue problem
\[
\sum_{x\in V} w(y,x)\,\phi(x) = \lambda\,\phi(y) \quad \text{for every point } y\in V,
\tag{2.16}
\]
we approximate the sum on the left-hand side using a standard quadrature rule, where the interpolation points are chosen by randomly selecting a subset of points from the set $V$ and the interpolation weights are chosen correspondingly. The Nyström extension then approximates (2.16) by
\[
\sum_{x\in X} w(y,x)\,\phi(x) = \lambda\,\phi(y),
\tag{2.17}
\]
where $X\subset V$ is a set of randomly chosen vertices. The set $X$ defines a partition of $V$ into $X$ and $Y := V\setminus X$. In (2.17) we approximate the value $\phi(y)$, for an eigenvector $\phi$ of $W$ and $y\in Y$, knowing only the values $\phi(x)$, $x\in X$, by solving the linear problem
\[
\phi(y) = \frac{1}{\lambda}\sum_{x\in X} w(y,x)\,\phi(x).
\tag{2.18}
\]
With this method we can approximate the values of an eigenvector of $W$, corresponding to the eigenvalue $\lambda$, on the whole set of points, using its values on the subset $X$ and solving the interpolated eigenvalue equation above. Generally, this is not as immediate as it sounds, since the eigenvectors of $W$ are not known in advance. However, by choosing $y\in X$ in (2.18), we find an eigenvalue problem for the known matrix $W_{XX}$ with entries $w(x,\tilde x)$, $x,\tilde x\in X$, which is a much smaller matrix than the full matrix $W$:
\[
\sum_{\tilde x\in X} w(x,\tilde x)\,\phi(\tilde x) = \lambda\,\phi(x) \quad \text{for every } x\in X.
\tag{2.19}
\]
If $X$ is small enough that this eigenvalue problem can be solved, then the eigenvalues $\lambda$ and the values $\phi(x)$, $x\in X$, can be computed, which in turn can be substituted back into (2.18) to find an approximation to $\phi(y)$ for any $y\in Y$. In short, we approximate the eigenvectors in (2.16) by extensions of the eigenvectors in (2.19), using the extension equation (2.18), and we approximate the eigenvalues in (2.16) by the eigenvalues from (2.19). The main Nyström assumption is that these approximated eigenvectors and eigenvalues approximately diagonalise $W$. For further details on the Nyström method, we refer the reader to Appendix A, where a description of the method is given in matrix notation.
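The two steps above, solving the small eigenproblem on the sampled set and extending each eigenvector to the remaining vertices, can be sketched as follows (a hypothetical helper with Gaussian weights; the normalisation into Laplacian eigenvectors performed in Appendix A is omitted here):

```python
import numpy as np

def nystrom_eigs(feats, m, sigma=1.0, rng=None):
    """Approximate eigenpairs of the full weight matrix W via Nystrom:
    diagonalise the small sampled block W_XX, then extend each eigenvector
    with phi(y) = (1/lambda) * sum_{x in X} w(y, x) phi(x).

    Illustrative sketch; the Gaussian weight and sigma scaling are assumed.
    """
    rng = np.random.default_rng(rng)
    n = feats.shape[0]
    idx = rng.permutation(n)
    X, Y = idx[:m], idx[m:]                   # sampled set X, remainder Y
    w = lambda a, b: np.exp(
        -((a[:, None, :] - b[None, :, :]) ** 2).sum(-1) / sigma ** 2)
    W_XX = w(feats[X], feats[X])              # small m x m block
    W_YX = w(feats[Y], feats[X])              # cross block for the extension
    lam, U = np.linalg.eigh(W_XX)             # small eigenproblem (2.19)
    phi = np.empty((n, m))
    phi[X] = U
    phi[Y] = W_YX @ U / lam                   # Nystrom extension (2.18)
    return lam, phi
```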
We present here the pseudocode combining all the different steps described above for the realisation of the GL minimisation. We recall that $\varepsilon$ is the scale parameter of the GL functional (2.2), $\sigma^2$ is the variance used in the Gaussian similarity function (2.9), $c$ is the convex splitting parameter in (2.11a)-(2.11b) and $|X|$ is the number of sample points in (2.17).
We will now give further details. First we randomly select a subset of pixels from the image. As described in Section 2.2, we create a vertex set $V$, which we partition into a set $X$, consisting of the vertices corresponding to the randomly chosen pixels, and a set $Y = V\setminus X$. We then compute the feature vectors of each vertex in $V$. If the image is a grey scale image, we can represent features by a scalar intensity map. If it is an RGB colour image instead, we use a vector-valued (red, green and blue) intensity map. We mirror the image at its boundary to define neighbourhoods also at the image edges. The feature function concatenates the intensity values in the neighbourhood of a pixel into a vector, following the consistent neighbourhood ordering defined in Section 2.2. Note that the dimension of the feature vector equals the number of pixels in the neighbourhood for a grey scale image, and three times that number for an RGB colour image.
Additional features, such as texture, can be considered as well. For instance, we consider the eight MR8 filter responses as texture features on a grey scale image, so that the feature function concatenates the eight filter responses over the neighbourhood of each pixel. Of course, a combination of colour and texture features can also be used, by concatenating the corresponding feature vectors for every pixel; when dealing with RGB colour images, the dimension of the feature vector grows accordingly.
Using (2.9), the Nyström extension can be performed for approximating the eigenvectors and eigenvalues of as described in Section 2.4 and in Appendix A, which are then used to compute the eigenvectors of and corresponding eigenvalues , compare (2.15). Recalling (2.14), those eigenvectors are used as basis functions for , the numerical approximation of in the -th iteration of the GL minimisation. Considering (2.12) and writing the nonlinear quantities appearing in terms of similarly as in (2.14), we have for
The computation of in the next iteration reduces to finding the coefficients in the expression
where is defined as .
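To illustrate the role of the Laplacian eigenvectors as a basis, the following minimal sketch projects a vertex function onto the eigenvector basis of a toy path-graph Laplacian and reconstructs it. In the actual scheme the coefficients are updated through the semi-implicit time-stepping rather than by plain projection; this only shows the expansion itself.

```python
import numpy as np

def spectral_coefficients(u, V):
    """Expand u in the orthonormal eigenvector basis V (eigenvectors as
    columns), returning the coefficients and the reconstruction.
    A sketch of the expansion u = sum_k a_k phi_k used for the GL
    iterates; the real scheme updates a_k via the time-stepping."""
    a = V.T @ u           # coefficients a_k = <phi_k, u>
    return a, V @ a       # reconstruction in the (here: full) basis

# tiny example: unnormalised Laplacian of a 3-node path graph
W = np.array([[0.0, 1, 0], [1, 0, 1], [0, 1, 0]])
L = np.diag(W.sum(1)) - W
lam, V = np.linalg.eigh(L)              # orthonormal eigenvectors
a, u_rec = spectral_coefficients(np.array([1.0, -1.0, 1.0]), V)
```

With the full basis the reconstruction is exact; truncating to the first few eigenvectors gives the low-dimensional representation used in the iteration.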
3 Hough transform for scale detection
In order to detect objects with specific, a priori known shapes in an image, in the following we make use of the Hough transform. For our purposes, we focus in particular on the detection of straight lines (for which the Hough transform was originally introduced and considered ) and of circles . Other applications of this transformation to more general curves exist as well. In [8, 42] the Hough transform is used in the context of astronomical and medical images for a specific class of curves (Lamet and Watt curves). In  applications to cellular mitosis are presented. There, the Hough transform recognises the cells (as circular/elliptical objects) and tracks them in the process of cellular division. For more details on the use of the Hough transform for line and circle detection we refer the interested reader to Appendix C.
Hough transform methods for edge detection are usually applied to binary images. Therefore, we start by using the classical Canny method for edge detection  in which we replace the original preliminary Gaussian filtering by an edge-preserving Total Variation smoothing  which has the advantage of removing noise while preserving edges. This step will result in a binary image for the most prominent edges in the image. Having decided which geometrical shape we are interested in (and, as such, its general parametric representation), the corresponding parameter space is subdivided into accumulator arrays (cells) whose dimension depends on the dimension of the parameter space itself (2D in the case of straight lines, 3D in the case of circles). Each accumulator array groups a range of parameter values. The accumulator array is initialised to and incremented every time an object in the parameter space passes through the cell. In this way, one looks for the peaks over the set of accumulator arrays as they indicate a high value of intersecting objects for a specific cell. In other words, they are indicators of potential objects having the specific geometrical shape we are interested in.
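The voting procedure described above can be sketched for straight lines as follows: a minimal accumulator over a discretised (rho, theta) parameter space, where the angular resolution `n_theta` is a hypothetical choice.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal (rho, theta) Hough accumulator for straight lines.
    `edges` is a binary edge map; every edge pixel votes for all lines
    rho = x*cos(theta) + y*sin(theta) passing through it. Peaks in the
    returned accumulator indicate likely lines."""
    H, W = edges.shape
    diag = int(np.ceil(np.hypot(H, W)))          # max possible |rho|
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)  # cells start at 0
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1       # one vote per cell
    return acc, thetas, diag
```

A vertical line of ten edge pixels, for example, produces an accumulator cell with ten votes at theta = 0 and rho equal to the line's column.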
Numerically, dealing with the Hough transform consists of looking for peaks of the accumulator arrays in the parameter space onto which the original image is mapped. We use the MATLAB routines hough, houghpeaks, and houghlines for straight line detection and imfindcircles for circle detection. The accuracy and the number of detections for such routines can be tuned by some parameters, such as, for instance, the maximum number of peaks one wants to consider, , or the array peak threshold, , i.e. the minimum number of elements for an accumulator array to be considered a peak. The user also has to specify an initial range of pixel values as a very rough approximation of the measurement scale. Namely, in the case of line detection this determines a minimum/maximum spacing between lines , whereas for circle detection it serves as a rough approximation of the range of values for the circles' radii. This rough approximation may be given, for example, by average data which the user knows a priori. We explain this with some examples in Section 4. The accuracy of the detection algorithm is tuned by a parameter . In the case of linear objects this corresponds to choosing the maximum number of pixels allowed between two line segments for them still to be considered a single line, whereas for circle detection it corresponds to how circular an object needs to be in order to be considered a circle.
4 Method, numerical results, and applications
We report in this section the numerical results obtained by the combination of the methods presented for the detection and quantitative measurement of objects in an image.
To avoid confusion, in the following we will distinguish between two different meanings of scale. By image scale we denote the proportion between the real dimensions (length, width) of objects in the image and their corresponding dimensions quantified in pixel count. Dealing with measurement tools, we use the term measurement scale to mean the ratio between a fixed unit of measure ( or ) marked on the measurement tool considered and the corresponding number of pixels in the image.
4.1 Male pied flycatcher’s blaze segmentation
Here we present the numerical results obtained by applying Algorithms 1 and 2 to the male pied flycatcher blaze segmentation and measurement problem described in the introduction. Our image database consists of 32 images of individuals from a particular population of flycatchers. Images are pixels and have been taken by a Canon 350D camera with a Canon zoom lens EFD –, see Figure 1. In each image one of two types of measurement tool is present: a standard ruler or a surface on which two concentric circles of fixed diameter ( the inner one, the outer one) are drawn. In the following we will refer to these tools as linear and circular rulers, respectively. Here, the measurement scale corresponds to the distance between ruler marks for linear rulers and to the radius of the inner circle for circular rulers.
Figure 1 shows clearly that the scale of the images in the database may vary significantly because of the different positioning of the camera in front of the flycatcher. In order to study possible correlations between the dimensions (i.e. perimeter, area) of the blaze and significant behavioural factors, the task then is to segment the blaze and detect automatically the scale of the image considered to provide scale-independent measurements.
Parameter choice for Algorithm 1
The GL-segmentation method exploits similarities and differences between pixels in terms of RGB intensities and texture within their neighbourhood. In our image database these similarities and differences are very distinctive and will guide the segmentation step. Recalling Section 2.5, we note that some parameters need to be tuned for the graph GL minimisation. These are the number of Nyström points, the variance of the similarity function (2.9), the GL parameter and the parameter for the convex splitting (2.11). However, in our numerical experiments we had to tune these parameters only once. Namely, regarding the choice of for both the head and blaze segmentation, we used values not bigger than of the total size of the image considered. The variance appearing in the similarity function (2.9) was set to , the weighting parameter was chosen as (a smaller choice would create numerical instabilities), and we set the convexity parameter to or larger in order to guarantee the convexity of the functional appearing in (2.11b).
Parameter choice for Algorithm 2
We briefly comment also on the choice of the parameters for the Hough transform, that is, Algorithm 2. Depending on the type of measurement tool considered (linear or circular ruler), different parameter selection methods are used. In the case of linear rulers: for the longest line detection (i.e. ruler edge identification) the parameters and were set to and to of the maximum value of the Hough transform matrix, respectively; for the detection of the ruler notches, the same parameters were chosen as and of the maximum value of the Hough transform matrix. As discussed in Section 3.1, the range was chosen based on previously collected average data on the head diameter. In particular, after the head detection step, the number of pixels corresponding to the diameter of the head was automatically computed by means of the option EquivDiam of the MATLAB routine regionprops and compared with the average measurement of provided by available databases on pied flycatchers. In this way, an initial, rough estimate of the ruler scale is found and used to determine a spacing parameter and the interval by setting and . This range serves as a suppression neighbourhood: once a peak in the Hough transform matrix (i.e. a line or a circle) is identified, the successive peaks found outside this range are set to (i.e. possible lines or circles outside this interval are discarded), while only the ones inside the range (typically, the next line/circle we want to detect) are kept. For our problem, this corresponds to identifying as candidates for ruler notches only those lines whose distance from the previously identified peak is at least and at most . Analogously, the same can be done for circular rulers, where we recall that the inner/outer radii are and , respectively. In this case , since the circular ruler consists of only two concentric circles.
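The suppression-neighbourhood rule for candidate ruler notches can be sketched as follows, with hypothetical inputs: sorted peak positions along the ruler direction and a spacing interval [d_min, d_max] derived from the rough scale estimate.

```python
def filter_peaks(positions, d_min, d_max):
    """Keep only detected peaks (e.g. candidate ruler-notch positions,
    in pixels) whose distance from the previously kept peak lies within
    the suppression range [d_min, d_max]; peaks spaced outside the
    range are discarded. A simplified sketch of the rule above."""
    kept = []
    for p in sorted(positions):
        if not kept:
            kept.append(p)                       # always keep the first peak
        elif d_min <= p - kept[-1] <= d_max:
            kept.append(p)                       # plausible next notch
        # peaks closer than d_min or farther than d_max are dropped
    return kept
```

For example, with a spacing range of [8, 12] pixels, spurious detections between genuine notches (such as printed digits on the ruler body) are rejected.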
Comparison with Chan-Vese model.
Due to the very irregular contours and the fine texture on the flycatcher's forehead, standard edge detection and variational segmentation methods such as the Canny edge detector or Mumford-Shah models, [52, 4, 11, 12], are not suitable for our task, as preliminary tests showed. The Chan-Vese model is not suitable either, mainly because of its small scale detection limits, its dependence on the initial condition, and its parameter sensitivity, which may prevent an automatic and accurate segmentation of the tiny, yet characteristic, feathers composing the blaze. In particular, the optimal parameters and appearing in the Chan-Vese functional and a sufficiently accurate initial condition typically need to be chosen by trial and error for every image at hand.
For comparison, we report in Figure 4 the blaze segmentation results obtained by using the Chan-Vese model (see [19, 20]) and by our graph-based method, which will be described in more detail in the following.
4.1.1 Detailed description of the method
We divide our task into different steps:
Starting from the reduced image, a second step similar to Step 1 is now performed for the segmentation of the blaze, using again Algorithm 1. A dictionary of blazes is used and an extended set of features is considered.
A refinement step is now performed in order to reduce the outliers detected in the process of segmentation.
We use the Hough transform based Algorithm 2 to detect objects in the image with a priori known geometrical shapes (lines for linear rulers, circles for circular rulers) for the computation of the measurement scale.
The final step is the measurement of the values we are interested in (i.e. the perimeter of the blaze, its area, and the width and height of the different blaze components). In the case of linear rulers our results are given up to some error, due to the uncertainty in the detection of the measurement scale, which is computed as the average distance between ruler marks.
Figure 5 shows a diagram which outlines the workflow of our method.
In order to establish relations with behavioural and biological data confirming or contradicting the initial assumption, presented in the introduction , of a correlation between blaze size and higher attractiveness, we have implemented a user-ready program for the quantitative analysis and measurement of the size of the bird blazes, which is currently used by the Department of Zoology of the University of Cambridge. The results of this study will be the topic of a forthcoming paper .
In the following we give more details about each step.
Step 1: Head detection.
We consider unlabelled images in the database and compare each of them with a dictionary of previously labelled images, see Figure 6. The training regions (i.e. the heads) are labelled with a value , the background with value . Unlabelled regions are initialised with value .
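The initialisation of the label function from the dictionary can be sketched as follows. The specific label values are elided in the text above; here we assume the two wells ±1 of the double-well potential for the labelled classes and 0 for unlabelled pixels, a standard choice for GL-type segmentation.

```python
import numpy as np

def init_labels(shape, fg_mask, bg_mask):
    """Initialise the label function from dictionary masks:
    assumed +1 on labelled foreground (head) pixels, -1 on labelled
    background, 0 on unlabelled pixels. Masks are boolean arrays."""
    u0 = np.zeros(shape)
    u0[fg_mask] = 1.0     # training region (head)
    u0[bg_mask] = -1.0    # labelled background
    return u0             # everything else stays 0 (unlabelled)
```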
The main computational difficulty in this step is due to the size of the images considered. This may affect the performance of the algorithm since, in order to apply the Nyström completion technique described in Section 2.4, one has to choose an adequate number of points whose features approximate well the whole matrix. The larger and more heterogeneous the image is, the larger the number of points needed to produce a sensible approximation. We circumvent this issue by noticing that at this stage of the algorithm we only need a rough detection of the head, which will be used subsequently for the accurate segmentation step. Thus, downscaling the image to a lower resolution (in our experiments, one tenth of the original resolution) allows us to use a small number of Nyström sample points (typically –) to produce an accurate result.
For this first step we use as features simply the RGB intensities and proceed as described in Section 2.5. Once the head is detected, the resulting image is upscaled again to its original resolution. The solutions computed for the images in Figure 1 are presented in Figure 7.
Step 2: Blaze segmentation.
We now consider the reduced image from which we want to extract the flycatcher's blaze. Again, a dictionary of different blazes is manually created by the user (see Figure 8). Training regions (the blazes) are labelled with value and the black feathers in the background with value ; as before, unlabelled regions are initialised with value . At this stage, RGB intensities alone are not enough to differentiate the blazes from the background consistently in a large number of bird images, due to the colour difference between different blazes. For this step, an additional feature to be considered is the texture of the blaze. For this purpose, we use the MR8 texture features presented in  and proceed as detailed in Section 2.5. For neighbourhoods, the feature vector for each pixel will be an element in , see Section 2.5. The Ginzburg-Landau minimisation provides the segmentation results shown in Figure 9.
Step 3: Segmentation refinement.
This step uses very simple morphological operations in order to remove false detections remaining after Step 2. These can occur due to the choice of the colour-texture features used to compute the feature vectors in Step 2. For instance, looking at Figure 9 (right) we observe that some patches on the left pied flycatcher's cheek have been detected, as they exhibit texture properties similar to the ones on the blaze. In order to prevent this, our software asks the user to confirm whether the segmentation result provided is the expected one or whether additional unwanted regions have been detected. If so, using the MATLAB routine bwconncomp we label all the connected components segmented in the previous step, discarding all those whose area is smaller than a fixed percentage (we use ) of the largest detected component (presumably, the blaze). This works well in practice, see Figure 10. If the user is not satisfied, he or she can manually remove the unwanted regions. Figure 11 shows some blaze segmentation results after the refinement step.
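The refinement step can be sketched as follows: a plain BFS connected-component labelling (an analogue of MATLAB's bwconncomp) followed by discarding components below an area threshold. The percentage used in the paper is elided above, so the 10% default here is a hypothetical stand-in.

```python
import numpy as np
from collections import deque

def remove_small_components(mask, frac=0.1):
    """Discard connected components of a binary mask whose area is
    below `frac` times the largest component's area (4-connectivity).
    A sketch of the bwconncomp-based refinement; `frac` is assumed."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    sizes = [0]                              # sizes[l] = area of component l
    for i in range(H):
        for j in range(W):
            if mask[i, j] and labels[i, j] == 0:
                lab = len(sizes)
                sizes.append(0)
                labels[i, j] = lab
                q = deque([(i, j)])
                while q:                      # BFS flood fill
                    y, x = q.popleft()
                    sizes[lab] += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W \
                                and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = lab
                            q.append((ny, nx))
    if len(sizes) == 1:                       # empty mask: nothing to do
        return mask
    largest = max(sizes[1:])
    keep = [l for l in range(1, len(sizes)) if sizes[l] >= frac * largest]
    return np.isin(labels, keep)
```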
Remark 4.1 (Robustness to noise).
In order to reproduce the more realistic situation of images suffering from noise, we artificially added Gaussian noise with zero mean and different variances to some of the images in our database and performed the three analysis steps of our method. We report in Figure 12 the results corresponding to two noise variances (, ). The presence of noise influences both the head and blaze segmentation only slightly; the combination of RGB and texture features extracted in the neighbourhood of each point, together with the comparison to the dictionary, makes the algorithm robust to noise and allows for an accurate blaze segmentation even in the noisy case.
Remark 4.2 (Comparison with MBO segmentation).
We compare the blaze segmentation results obtained by minimising the discrete GL functional with the ones obtained using the segmentation algorithm considered in  as a variant of the classical Merriman-Bence-Osher (MBO) scheme . More details on the connections between this approach and the GL minimisation, as well as some insights on its numerical realisation, are given in Appendix B. Following faithfully what is described in Sections 2.2 and 2.4 for the graph and the operator construction steps, respectively, we implemented the MBO segmentation algorithm following [45, Section 2]. We remark that the MBO method has the advantage of eliminating the dependence on the interface parameter of the GL functional by means of a combination of heat diffusion and a thresholding step. Instead, the heat diffusion time needs to be chosen; in our numerical implementation we used . Since no convex splitting strategies are required in this case, due to the absence of the non-convex double-well term, standard Fourier transform methods are used to solve the resulting time-stepping scheme. In Figure 13 we report the blaze segmentation results obtained after applying a refinement step similar to the one described above: we note that a segmentation result comparable to the ones shown in Figure 11 is obtained. Moreover, robustness to noise is observed in this case as well. In terms of computational times, we observed that replacing the GL minimisation step with the MBO one did not significantly affect the speed of the segmentation algorithm.
Step 4: Measurement scale detection.
The images in our database fall into two groups: the first is characterised by the presence of linear rulers, whereas the second contains circular rulers (Figure 1). We thus use the Hough transform based Algorithm 2 to detect lines or circles, respectively; the user is required to tell the software which type of object to detect. In both cases, in order to avoid false detections (such as “aligned” objects erroneously detected as lines, or circle-like objects wrongly considered circles, see Figure 14), a good candidate for a rough, sensible approximation of the measurement scale is needed, as described in Section 3.1. In order to obtain this, we proceed as follows: after detecting the head as in Step 1, we use the option EquivDiam of the MATLAB routine regionprops to compute the diameter of the head region (in pixels). We then compare this measurement with pre-collected average measurements of head diameters of male pied flycatchers of a similar population (in ), thus obtaining an initial approximation of the measurement scale. In the case of images containing linear rulers, this serves as a spacing parameter for the algorithm: only lines at least pixels apart from each other will be considered. In the case of circular rulers, the same rough approximation similarly serves as an indication of the range of values in which the Hough transform based MATLAB function imfindcircles will look for circle radii. For linear ruler images, the algorithm looks only for parallel lines aligned with a prescribed direction. We set this direction as the one perpendicular to the longest line in the image (since the expectation is that this longest line is the edge of the ruler). Results of this step are shown in Figure 15.
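The rough scale estimate from the head region can be sketched as follows. It computes the equivalent diameter, i.e. the diameter of the disc with the same area as the region, which is what MATLAB's EquivDiam option returns; the average head diameter in mm is an assumed a priori input.

```python
import numpy as np

def estimate_scale(head_mask, head_diameter_mm):
    """Rough pixels-per-mm estimate: equivalent diameter of the
    segmented head region (diameter of the disc with the same area,
    the analogue of regionprops 'EquivDiam') divided by an a priori
    average head diameter in mm. A sketch with an assumed average."""
    area_px = int(np.count_nonzero(head_mask))
    equiv_diam_px = np.sqrt(4.0 * area_px / np.pi)
    return equiv_diam_px / head_diameter_mm      # pixels per mm
```

This estimate then seeds the spacing interval for line detection or the radius range for imfindcircles.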
Outliers removal for linear rulers.
The scale detection step described above may miss some lines on the ruler. This can be due to oversmoothing in the denoising step, to high threshold values for edge detection, or to the choice of a large spacing parameter. Furthermore, as we can see from Figures 1 and 15, we can reasonably assume that the ruler lies on a plane, but its bending can distort some distances between lines. Moreover, a few other false line detections can occur (like the number marked on the ruler's main body in Figure 15). To exclude these cases, we compute the distance (in pixels) between all consecutive detected lines and eliminate possible outliers using the standard interquartile range (IQR) formula  for outliers' removal. Denoting by $Q_1$ the lower quartile and by $Q_3$ the third quartile, an outlier is every element not contained in the interval $[Q_1 - 1.5\,\mathrm{IQR},\; Q_3 + 1.5\,\mathrm{IQR}]$, where $\mathrm{IQR} = Q_3 - Q_1$. Finally, we compute the empirical mean, variance and standard deviation (SD) of the values within this interval, thus obtaining a final indication of the scale of the ruler together with an indicator of the precision of the method.
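The outlier-removal rule can be sketched as:

```python
import numpy as np

def iqr_filter(distances):
    """Standard interquartile-range rule: keep only consecutive-line
    distances inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], then return the
    mean and sample standard deviation of the kept values as the
    ruler-scale estimate and its precision indicator."""
    d = np.asarray(distances, dtype=float)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    kept = d[(d >= q1 - 1.5 * iqr) & (d <= q3 + 1.5 * iqr)]
    return kept.mean(), kept.std(ddof=1), kept
```

A distance list contaminated by one missed notch (e.g. a double spacing) is cleaned before the mean spacing is taken as the scale.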
Step 5: Measurement.
Once the measurement scale has been detected, it is easy to obtain all the required measurements. We are interested in the perimeter and the area of the blaze, as well as in the height and width of the whole blaze component. For linear rulers, due to the error committed in the scale detection step, these values present some uncertainty and variability (see above). In Table 1 we show the results of numerical tests on a sample of images with linear rulers. For every image in the sample we compute the standard deviation (SD) error and report in the table the minimum, maximum, and average SD error over the single ones computed, together with the relative standard deviation (RSD), which gives a percentage indication of the error committed:
$$\mathrm{RSD} = \frac{\sigma}{\bar{x}} \times 100\%,$$
where $\sigma$ is the sample SD and $\bar{x}$ is the sample mean of the measurements. We observe a minimum and maximum SD of and pixels, respectively, which, compared to the dimension of the original image ( pixels), suggests a reasonable precision. This is confirmed by the average SD value over the sample, which is found to be pixels. In percentage, the average error over the sample is . For circular rulers, we observed in all our experiments that an initial approximation of the range of values for the circle radius (see Step 4 above) results in a robust and typically outlier-free detection of the circular ruler and consequently in an accurate measurement of its radius; the only possible cause of variability and error is its bending.
Uncertainty in the measurements of lengths and areas is calculated with standard formulas in propagation of errors.
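For instance, with the blaze area measured as a pixel count N and the scale s in pixels per mm, the area in physical units is A = N / s², so to first order the relative error of the area is twice the relative error of the scale. A sketch of this standard propagation rule:

```python
def area_with_uncertainty(area_px, scale, scale_sd):
    """First-order propagation of the scale uncertainty into an area
    measurement: A = N / s**2 (N = pixel count, s = pixels per mm)
    gives dA/A = 2 * ds/s. Returns the area in mm^2 and its absolute
    uncertainty. A sketch of the 'standard formulas' mentioned above."""
    area_mm2 = area_px / scale ** 2
    rel_err = 2.0 * scale_sd / scale      # relative error doubles for areas
    return area_mm2, area_mm2 * rel_err
```

So a blaze of 40000 pixels measured at a scale of 20 ± 0.5 px/mm yields an area of 100 mm² with an uncertainty of 5 mm².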
SD min | SD max | mean SD | RSD min | RSD max | mean RSD
Despite these variabilities, our method is a flexible and semi-supervised approach for this type of problem. Further tests on the whole set of images and improvements on its accuracy are a matter of future research. The analysis of the resulting data measurements for the particular problem of flycatchers’ blaze segmentation will be the topic of the forthcoming paper .
In Table 2 we compare the use of our combined approach with the use of the manual line tool of the IMAGEJ software for the measurement of the blaze area. Namely, we measured in Figure 0(b) and in Figure 0(c) the ruler scale by means of the IMAGEJ line tool by considering, for each image, two different -sections of the ruler; we then manually measured the number of pixels contained in each, divided each measurement by , and averaged the two results to obtain an estimate of the ruler scale (i.e. the number of pixels crossed by a 1 mm horizontal or vertical line segment). We then measured the area of the blaze after segmenting it by means of the ‘magic-wand’  IMAGEJ tool and by trapezium fitting  (see Figure 2). The results are reported in Table 2, both as the number of image pixels inside the blaze and in , where the second value has been calculated using the measurement scale detected as described above. We then repeated these measurements using our fully automated Hough transform method for ruler scale detection, reporting as above the blaze area computed both as a number of image pixels and in . We observe a good level of accuracy of our combined method (see also Table 1) with respect to the ‘magic-wand’ manual approach of Moreno , while, unsurprisingly, the blaze measurements obtained by pure trapezium fitting, as proposed by Potti and Montalvo in , tend to overestimate the area of the blaze.
Scale (# pixels = 1 mm) | Blaze area (pixel count) | Blaze area ()
Manual | HT (Ours) | MW | Trap. | GL (Ours) | MW | Trap. | GL (Ours)
4.2 Moles monitoring for melanoma diagnosis and staging
In this section we focus on another application of the scale detection Algorithm 2, in the context of melanoma (skin cancer) monitoring, see Figure 3. Early signs of melanoma are sudden changes in existing moles, encoded in the mnemonic ABCD rule: Asymmetry, irregular Borders, variegated Colour and Diameter (see Prevention: ABCD's of Melanoma, American Melanoma Foundation, http://www.melanomafoundation.org/prevention/abcd.htm). In the following we focus on the D sign.
Due to their dimensions and their irregular shapes, moles are often very hard to measure. A common dermatological practice consists in positioning a ruler under the mole and then taking a picture with a professional camera. Sudden changes in the evolution of the mole are then observed by comparing pictures taken over time; their quantitative measurement may hence be an indication of a malignant evolution.
In the examples reported in Figure 16, we use the graph segmentation approach described in Algorithm 1 where texture-characteristic regions are present (see Figure 15(a)), and the Chan-Vese model  for images characterised by the homogeneity of the mole and skin regions and by the regularity of the mole boundaries (Figures 15(b) and 15(c)). For the numerical implementation, we use the freely available online IPOL Chan-Vese segmentation code . Let us point out that previous works using variational models for accurate melanoma segmentation already exist in the literature, see [18, 1], but no measurement technique is considered there.
4.3 Other applications: animal tracks and archeological finds’ measurement
We conclude this section presenting some other applications for the combined segmentation and scale detection models presented above.
The first application is the identification and classification of animals living in a given area through their soil, snow and mud footprints. Their quantitative measurement is also of interest in the study of the age and size of a particular animal species. As in the problems above, such a measurement very often reduces to an inaccurate reading taken with a ruler placed next to the footprint. In Figure 16(a) (image from http://mamajoules.blogspot.co.uk/2015/01/a-naturalists-thoughts-on-animal-tracks.html) our combined method is applied to the measurement of a white-tailed deer footprint.
As a final application, we focus on archaeology. In many archaeological finds, objects need to be measured for comparisons and historical studies . Figure 16(b) shows the application of our method to coin measurement. Due to the coin's circular shape, a combined Hough transform method for circle and line detection has been used for this image. The example image is taken from , where the authors propose a gradient threshold based method combined with a Fourier transform approach. Despite being quite efficient for the particular applications considered, such an approach relies in practice on the good experimental setting in which the image is taken: almost noise-free images and very regular objects with sharp boundaries (mainly coins) on homogeneous backgrounds are considered. Furthermore, results are reported only for rulers with vertical orientation and no bending.
5 Conclusions
In this paper we consider image segmentation applications involving the measurement of a region's size, which has applications in several disciplines. For example, zoologists may be interested in quantitative measurements of some parts of the body of an animal, such as distinctive regions characterised by specific colours and texture, or in animal tracks to differentiate between individuals of the species. In medical applications, quantifying an evolving, possibly malignant, mass (like, for instance, a skin melanoma) is crucial for early diagnosis and treatment. In archaeology, finds need to be measured and classified. In all these applications, a common measurement tool is often juxtaposed to the region of interest and its measurement is simply read directly from the image. This practice is typically inaccurate and imprecise, due to the conditions in which pictures are taken. There may be noise corrupting the image, the object to be measured may be hard to distinguish, and the measurement tool may be misplaced and far from the object to measure. Moreover, the scale of the image depends on the image itself, due to the varying distance of the ruler and of the objects to measure from the camera.
The method presented (based on ) consists of a semi-supervised approach which, by training the algorithm with some examples provided by the user, extracts relevant features from the training image (such as RGB intensities, texture) and uses them to detect similar regions in the unknown image. Mathematically, this translates into the minimisation of the discrete Ginzburg-Landau functional defined on graphs. To overcome the computational issues due to the size of the data, Nyström matrix completion techniques are used and for the design of an efficient numerical scheme, convex splitting is applied. The measurement scale detection task is performed by using the Hough transform, a geometrical transformation which is capable of detecting objects with a priori known geometrical shapes (like lines on a ruler or circles with fixed diameter). Once the measurement scale is detected, all the measurements are converted into a unit of measure which is not image-dependent.
Our method represents a systematic and reliable combination of segmentation approaches applied to several real-world image quantification tasks. The use of dictionaries, moreover, allows for flexibility since, whenever needed, the training database can be updated. With respect to recent developments in the field of data mining for the analysis of big data, where predictions are often performed using training sets and clustering, our approach represents an interesting alternative to standard machine learning algorithms (such as $k$-means).
Acknowledgements
Many thanks to Colm Caulfield, who introduced HMR and the bird segmentation problem to us mathematicians, and to Andrea Bertozzi for her very useful comments on the manuscript. LC acknowledges support from the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis (CCA). CBS acknowledges support from EPSRC grants Nr. EP/J009539/1 and EP/M00483X/1. Moreover, this project has been supported by King Abdullah University of Science and Technology (KAUST) Award No. KUK-I1-007-43. HMR is currently supported by an Institute Research Fellowship at the Institute of Zoology, by the Department of Zoology at the University of Cambridge, and by Churchill College, Cambridge.
Appendix A The Nyström extension
Let us first define the submatrices and as
Analogous definitions hold for and . Each of these matrices is the submatrix whose elements are the weights between the points in , or between the two sets. With this notation, the whole matrix can be written in block form as
Similarly, vectors can be written as . We focus on the spectral decomposition of the first block of , that is . Since this matrix is symmetric, calling the matrix containing the eigenvalues of , by the spectral theorem (compare with (2.19)) we have , with the orthogonal matrix having as columns the eigenvectors of . Writing (2.18) for , in operator form, we obtain as
The approximated eigenvectors of can then be written in matrix form as
Let us observe that
Therefore, the Nyström extension can be interpreted as the approximation , under the approximation of given by . The quality of the approximation of the full matrix is quantified by the norm of the Schur complement , see .
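A minimal NumPy sketch of the extension: the small sampled block is eigendecomposed and its eigenvectors are extended to the unsampled points. For a symmetric positive semidefinite matrix whose rank equals that of the sampled block, the resulting approximation is exact, which the toy example below exploits.

```python
import numpy as np

def nystrom(W_XX, W_XY):
    """Nystrom extension sketch: eigendecompose the sampled block
    W_XX = U L U^T (spectral theorem, W_XX symmetric) and extend the
    eigenvectors to the unsampled points via W_YX U L^{-1}. The full
    matrix is then approximated by V L V^T with V = [U; W_YX U L^{-1}].
    Assumes the eigenvalues of W_XX are nonzero."""
    lam, U = np.linalg.eigh(W_XX)
    U_ext = (W_XY.T @ U) / lam          # divide column k by lam_k
    V = np.vstack([U, U_ext])           # approximate eigenvectors of full W
    return lam, V
```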
Recalling the definition of the symmetric graph Laplacian given by (2.8) and the relation in (2.15) between the spectral decomposition of and that of , we observe that a normalisation step is now needed in order to obtain the spectral decomposition of . Defining as the -dimensional vector consisting of ones, and analogously, we use (A) and start by computing the nonnegative vector by
Therefore, the matrices and can be normalised simply by considering:
where the division is intended element-wise and is the standard vector tensor product.
A further normalisation step is now needed, since the approximated eigenvectors of , i.e. the columns of the matrix in (A.2), may not be orthogonal. Such a normalisation may be obtained by using auxiliary unitary matrices. We refer the reader to [10, Section 3.2] for more details on this.
Once these additional normalisation steps are completed, we obtain a spectral decomposition of in terms of its eigenvalues and the corresponding normalised eigenvectors . Therefore, recalling (2.15), the spectral decomposition of is given in terms of the eigenvalues and eigenvectors .
Appendix B The MBO scheme for image segmentation
which has often been studied for the modelling of several phase transition and separation problems and for the study of mean curvature flow (see, e.g., ). In the limit solutions consist of two phases corresponding to the wells of . In  it is shown that, for rescaled solutions of equation (B.1), the interface between these phases evolves according to mean curvature flow. In , Merriman, Bence and Osher propose an alternative approach (later named MBO scheme) which, by using threshold dynamics, approximates the mean curvature flow of the interface at discrete times. As proved rigorously in , for small values of the interface parameter , the MBO scheme can then be used to solve equation (B.1) numerically.
In , the authors propose a variant of the MBO scheme as an alternative way to (approximately) minimise the graph GL functional with fidelity term (2.2). Recalling the graph framework introduced in Section 2.2, the MBO segmentation starts from the initialisation given by (2.13) and computes, for every , the new iterate from by applying the following two steps sequentially:
Step 1 (diffusion with forcing term): Starting from , solve for every the discretised heat diffusion equation with fidelity term
where is the heat diffusion time and is the number of diffusion steps. In practice, has to be chosen small enough to approximate motion by mean curvature, yet large enough to avoid freezing (or pinning), which occurs when the diffusion time is so short that not enough mass diffuses along the edges of the network and the thresholding operation in Step 2 below leaves unchanged.
Step 2 (thresholding): For every point set as:
Numerically, (B.2) is solved at each diffusion time step by considering the spectral decomposition of with respect to the eigenvectors of the operator , similarly to (2.14), and using classical Fourier transform methods to compute the new iterate .
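A toy version of the two-step iteration may help fix ideas. The graph, the fidelity mask, and all parameter values below are invented for illustration; the diffusion step is solved semi-implicitly in the eigenbasis of the symmetric graph Laplacian, which plays the role of the graph Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weighted graph on N nodes (symmetric weights, zero diagonal).
N = 30
W = rng.random((N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
d = W.sum(1)
L_s = np.eye(N) - W / np.sqrt(np.outer(d, d))   # symmetric graph Laplacian

lam, V = np.linalg.eigh(L_s)                    # graph "Fourier" basis

mu = 50.0                                       # fidelity strength
known = np.zeros(N); known[:5] = 1.0            # reference labels
mask = np.zeros(N); mask[:5] = 1.0              # fidelity acts only here

u = rng.random(N)                               # initial labelling in [0, 1]
dt, n_diff = 0.01, 5                            # diffusion step size / steps
for _ in range(10):                             # outer MBO iterations
    # Step 1: diffusion with forcing term, n_diff semi-implicit heat steps
    # (I + dt * L_s) u_new = u - dt * mu * mask * (u - known),
    # solved cheaply in the eigenbasis of L_s.
    for _ in range(n_diff):
        rhs = u - dt * mu * mask * (u - known)
        u = V @ ((V.T @ rhs) / (1.0 + dt * lam))
    # Step 2: thresholding back to the pure phases {0, 1}.
    u = (u >= 0.5).astype(float)
```

In a large-scale setting one would keep only the leading eigenpairs (e.g. from a Nyström approximation) rather than the full basis `V`.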
Appendix C The Hough transform
The general idea behind the use of the Hough transform [36, 24] is to map the ambient space to an auxiliary space, called the parameter space (as it is related to the parametric representation of the geometrical objects we are interested in). There, objects with specified shapes are easily recognisable as peaks of specific functions. Let us clarify these concepts with two examples.
Detecting line segments.
We start from the typical slope-intercept form of a line:
Traditionally, the equation above is viewed as describing the points whose coordinates satisfy (C.1) for fixed values of and . In other words, these values identify a specific straight line in the - plane, cf. Figure 17(a). Rewriting (C.1) as and keeping the coordinates fixed, we obtain a new equation of a straight line in the - plane, cf. Figure 17(b), which depicts the parameter space. Each point thus generates a line in the - parameter space, whose sign-changed slope is given by its value and whose -intercept by its value. If several of these lines intersect at a common point, the corresponding points lie on the same straight line in the - plane, and the coordinates of the intersection point in parameter space give the slope and -intercept, respectively, of that line in the - plane.
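This voting construction can be sketched directly: each point casts one vote per candidate slope along its line in the parameter plane, and the accumulator peak recovers the common line. The point set and the grid resolutions below are made up for the example:

```python
import numpy as np

# Points mostly on the line y = 2x + 1, plus two off-line clutter points.
pts = [(x, 2 * x + 1) for x in range(10)]
pts += [(3, 9), (8, 2)]

m_vals = np.linspace(-4, 4, 81)                # candidate slopes (step 0.1)
b_vals = np.linspace(-10, 10, 201)             # candidate intercepts (step 0.1)
acc = np.zeros((m_vals.size, b_vals.size), dtype=int)

for x, y in pts:
    for i, m in enumerate(m_vals):
        b = y - m * x                          # the point's line in (m, b) space
        j = np.argmin(np.abs(b_vals - b))      # nearest intercept bin
        if abs(b_vals[j] - b) <= (b_vals[1] - b_vals[0]) / 2:
            acc[i, j] += 1                     # cast a vote

i, j = np.unravel_index(acc.argmax(), acc.shape)
print(m_vals[i], b_vals[j])                    # peak ≈ (2.0, 1.0)
```

Note that the slope-intercept form cannot represent vertical lines, which is why practical Hough implementations usually switch to a polar (angle-distance) parameterisation of the line.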
Hence, if we are given a black and white image in the - plane, and for all coordinates of black locations in the image, we draw the corresponding lines in the - plane, intersection points of those lines will tell us which