1 Introduction
Graph models have been used in image analysis for a long time. The edited book [48] gives an overview of methods in this field. However, approaches from quantitative graph theory such as graph indices have not played a significant role in these applications so far. This is to some extent surprising as it is not a far-fetched idea to model information contained in small patches of a textured image by graphs, and once this has been done, graph indices with their ability to extract in a quantitative form structural information from large collections of graphs lend themselves as a promising tool specifically for texture analysis. A first step in this direction has been made in [75] where a set of texture descriptors was introduced that combines a construction of graphs from image patches with well-known graph indices. This set of texture descriptors was evaluated in [75] in the context of a texture discrimination task. In [76], an example for texture-based image segmentation was presented based on this work.
The present paper continues the work begun in [75] and [76]. Its purpose is twofold. On one hand, it restates and slightly extends the experimental work from [76] on texture segmentation, focussing on those descriptors that are based on entropy measures, which turned out particularly useful in the previous contributions. On the other hand, it undertakes a first attempt to analyse the graph-index-based texture descriptors with regard to what kind of information they actually extract from a textured image.
Structure of the paper.
The remaining part of Section 1 briefly outlines the fields of research that are combined in this work, namely quantitative graph theory, see Section 1.1, graph models in image analysis with emphasis on the pixel graph and its edge-weighted variant, see Section 1.2, and finally texture analysis, Section 1.3. In Section 2 the construction of graph-entropy-based texture descriptors from [75] is detailed. Section 3 gives a brief account of the geodesic active contour method, a well-established approach for image segmentation that is based on the numerical solution of a partial differential equation. Texture segmentation combining the graph-entropy-based texture descriptors with the geodesic active contour method is demonstrated on two synthetic examples that represent typical realistic texture segmentation tasks, and a real-world example in Section 4. Some theoretical analysis is presented in Section 5 where (one setup of) graph-entropy-based texture descriptors is put into relation with fractal dimension measurements on a metric space derived from the pixel graph, and thus a connection is established between graph entropy methods and fractal-based texture descriptors. A short conclusion, Section 6, ends the paper.

1.1 Quantitative Graph Theory
Quantitative measures for graphs have been developed for almost sixty years in mathematical chemistry as a means to analyse molecular graphs [6, 38, 41, 62, 79]. In the course of time, numerous graph indices have been derived based on edge connectivity, vertex degrees, distances, and information-theoretic concepts, see e.g. the work [25] that classifies a large number of descriptors from the literature and subjects them to a large-scale statistical evaluation on several test data sets. Recently, interesting new graph indices based on the so-called Hosoya polynomial have been proposed [27]. Fields of application have diversified in the last decades to include e.g. biological and social networks, and other structures that can mathematically be modelled as graphs, see [24]. Efforts to apply statistical methods to graph indices across these areas have been bundled in the emerging field of quantitative graph theory [22, 24].

Many contributions in this field group around the tasks of distinguishing and classifying graphs, and quantifying the differences between graphs. The first task focusses on the ability of indices to uniquely distinguish large sets of individual graphs, termed discrimination power [5, 26, 27]. For the latter task, inexact graph matching, the graph edit distance [32, 69] or other measures quantifying the size of substructures that are shared or not shared between two graphs are of particular importance, see also [18, 21, 23, 30, 63, 74, 80]. The concept of discrimination power has to be complemented for this purpose by the principle of smoothness of measures, see [31], which describes how similar the values of a measure are when it is applied to similar graphs. In [33], the quantitative measures of structure sensitivity and abruptness have been introduced in order to precisely analyse the discrimination power and smoothness of graph indices. These measures are based on the average and maximum, respectively, of the changes of graph index values when the underlying graph is modified by one elementary edit step of the graph edit distance.
Discrimination of graph structures by graph indices is also a crucial part of the texture analysis approach discussed here. Thus, discrimination power and the related notions of high structure sensitivity and low abruptness matter also in the present context. However, unique identification of individual graphs is somewhat less important in texture analysis than when examining single graphs as in [27, 33], as in texture analysis one is confronted with large collections of graphs associated with image pixels, and one is interested in separating these into a small number of classes representing regions. Not only will each class contain numerous graphs, but also the spatial arrangement of the associated pixels is to be taken into account as an additional source of information, as segments are expected to be connected.
1.2 Graph Models in Image Analysis
As can be seen in [48], there are several ways in which image analysis can be linked to graph concepts. A large class of approaches is based on graphs in which the pixels of a digital image take the role of vertices, and the set of edges is based on neighbourhood relations, with 4- and 8-neighbourhoods as the most popular choices in 2D, and similar constructions in 3D, see [47, Section 1.5.1]. To imprint actual image information on such a graph, one can furnish it with edge weights that are based on image contrasts. Among others, the graph cut methods [39] that have recently received much attention for applications such as image segmentation [40, 71] and correspondence problems [7, 66] make use of this concept. This setup is also central for the work presented here, see the more detailed account of the pixel graph and edge-weighted pixel graph of an image in Section 2.1 of the present paper.
Generalising the pixel-graph framework, the graph perspective makes it possible to transfer image processing methods from the regular mesh underlying standard digital images to non-regular meshes that can be related to scanned surface data [13] but arise also from standard images when considering non-local models [10] that have recently received great attention in image enhancement. Graph morphology, see e.g. [55], is one of these generalisations of image processing methods to non-regular meshes, but also variational and PDE frameworks have been generalised in this way [29].
We briefly mention that graphs can also be constructed, after suitable preprocessing, from vertices representing image regions, see [47, Section 1.5.2], opening avenues to high-level semantic image interpretation by means of partition trees. Comparison of hierarchies of semantically meaningful partitions can then be achieved e.g. using graph edit distance or related concepts [32].
Returning to the pixel-graph setup which we will also use in this work, see Section 2.1, we point out a difference of our approach to those that represent the entire image in a single pixel graph. We focus here on subgraphs related to small image patches, thus generating large sets of graphs whose vertex sets, connectivity and/or edge weights encode local image information. To extract meaningful information from such collections, statistical methods such as entropy-based graph indices are particularly suited.
1.3 Texture
In image analysis, the term texture refers to the small-scale structure of image regions, and as such it has been an object of intensive investigation since the beginnings of digital image analysis. Early approaches to defining and analysing textures were undertaken, for example, in [36, 37, 64, 73, 81].
Complementarity of texture and shape.
Real-world scenes often consist of collections of distinct objects which in the process of imaging are mapped to regions in an image delineated by more or less sharp boundaries. While the geometric description of region boundaries is afforded by the concept of shape and typically involves large-scale structures, texture represents the appearance of the individual objects: either their surfaces in the case of reflection imaging (such as photography of opaque objects), or their interior if transmission-based imaging modalities (such as transmission microscopy, X-ray, magnetic resonance) are being considered. Texture is then expressed in the distribution of intensities and their short-scale correlations within a region. A frequently used mathematical formulation of this distinction is the cartoon–texture model that underlies many works on image restoration and enhancement, see e.g. [56]. In this approach, (space-continuous) images are described as the sum of two functions: a cartoon component from the space of functions of bounded variation, and a texture component from a suitable Sobolev space. In a refined version of this decomposition [1], noise is modelled as a third component assigned to a different function space.
Note that also in image synthesis (computer graphics) the complementarity of shape and texture is used: here, textures are understood as intensity maps that are mapped onto the surfaces of geometrically described objects.
The exact frontier between shape and texture information in a scene or image, however, is model-dependent. The intensity variation of a surface is partly caused by geometric details of that surface. With a coarse-scale modelling of shape, small-scale variations are included in the texture description, whereas with a refined modelling of shape, some of these variations become part of the shape information. For example, in the texture samples shown in Figure 1(a) and (b), a geometric description with sufficiently fine granularity could capture individual leaves or blossoms as shapes, whereas the large-scale viewpoint treats the entire ensemble of leaves or blossoms as texture.
Texture models.
Capturing texture is virtually never possible on the basis of a single pixel. Only the simplest of all textures, homogeneous intensity, can be described by a single intensity. For all other textures, intensities within neighbourhoods of suitable size (that differs from texture to texture) need to be considered to detect and classify textures. Moreover, there is a large variety of structures that can be constitutive of textures, ranging from periodic patterns in which the arrangement of intensity values follows strict geometric rules, via near-periodic and quasi-periodic structures to irregular patterns where just statistical properties of intensities within a neighbourhood are characteristic of the texture. The texture samples in Figure 1(a) and (b) are located rather in the middle of the scale where both geometric relations and statistics of the intensities are characteristic of the texture; near-periodic stripe patterns as in the later examples, Figures 4 and 5, are more geometrically dominated.
With emphasis on different categories of textures within this continuum, numerous geometric and statistical approaches have been developed over the decades to describe textures. For example, frequency-based models [34, 44, 68] emphasise the periodic or quasi-periodic aspect of textures. Statistics on intensities such as [64] mark the opposite end of the scale, whereas models based on statistics of image derivative quantities like gradients [37] or structure tensor entries [9] attempt to combine statistical with geometrical information. A concept that differs significantly from both approaches has been proposed in [49] where textures are described generatively via grammars.

Also fractals [52] have been proposed as a means to describe, distinguish and classify textures. Recall that a fractal is a geometric object, in fact a topological space, for which it is possible to determine, at least locally, a Minkowski dimension (or, almost identically, Hausdorff dimension) which differs from its topological dimension. Assume that the fractal is embedded in a surrounding Euclidean space, and that it is compact. Then it can be covered by a finite number of boxes, or balls, of prescribed size. When the size of the boxes or balls is sent to zero, the number of them needed to cover the structure grows with some power of the inverse box or ball size. The Minkowski dimension of the fractal is essentially the exponent in this power law. The Minkowski dimension of a fractal is often non-integer (which is the reason for its name); more precisely, however, the defining property is that Minkowski and topological dimensions differ, which also includes cases like the Peano curve whose Minkowski dimension is an integer (namely 2) but still different from the topological dimension (which is 1). See also [2, 4] for different concepts of fractal dimension.
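The box-counting principle just described can be made concrete in a few lines of Python. The following sketch (an illustration added for concreteness, with function names of our own choosing, not code from the cited literature) counts occupied boxes of decreasing size on a Sierpinski carpet and fits the slope of the log–log relation, which recovers the known dimension log 8 / log 3 ≈ 1.893:

```python
import numpy as np

def sierpinski_carpet(depth):
    """Binary image of the Sierpinski carpet, side length 3**depth."""
    img = np.ones((1, 1), dtype=bool)
    for _ in range(depth):
        n = img.shape[0]
        new = np.zeros((3 * n, 3 * n), dtype=bool)
        for i in range(3):
            for j in range(3):
                if (i, j) != (1, 1):          # middle ninth stays empty
                    new[i * n:(i + 1) * n, j * n:(j + 1) * n] = img
        img = new
    return img

def box_counting_dimension(img, box_sizes):
    """Estimate the Minkowski (box-counting) dimension of the support of
    a binary image as the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in box_sizes:
        h, w = img.shape
        occupied = 0
        # count boxes of side s containing at least one foreground pixel
        for i in range(0, h, s):
            for j in range(0, w, s):
                if img[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, float)),
                          np.log(counts), 1)
    return slope

carpet = sierpinski_carpet(5)                  # 243 x 243 pixels
dim = box_counting_dimension(carpet, [1, 3, 9, 27, 81])
```

On the exact carpet the power law holds exactly for box sizes that are powers of 3, so the fitted slope matches the theoretical value; for natural images the fit is only approximate and the choice of box sizes matters.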
Textured images can be associated with fractals by considering the image manifold, i.e. the function graph if the image is modelled as a function over the image plane, which is naturally embedded in the product space of the image domain and the range of intensity values. For example, a planar grey-value image $u : \Omega \subset \mathbb{R}^2 \to \mathbb{R}$ has the image manifold $\{(x, u(x)) : x \in \Omega\} \subset \mathbb{R}^3$. The dimension of this structure can be considered as a fractal-based texture descriptor. This approach has been stated in [58] where fractal dimension was put into relation with image roughness and the roughness of physical structures depicted in the image. Many works followed this approach, particularly in the 1980s and early 1990s when fractals were under particularly intensive investigation in theoretical and applied mathematics. In [2] several of these approaches are reviewed. An attempt to analyse fractal dimension concepts for texture analysis is found in [72]. The concept has also been transferred to the analysis of 1D signals, see [53, 54]. During the last two decades the interest in fractal methods has somewhat declined, but research in the field remains ongoing as can be seen from more recent publications, see e.g. [61] for signal analysis and [12, 43] for image analysis with applications in material science. With regard to our analysis in Section 5, which leads to a relation between graph methods and fractals, it is worth mentioning that already [19] linked graph and fractal methods in image analysis, albeit considering shape description rather than texture.
Texture segmentation.
The task of texture segmentation, i.e. decomposing an image into several segments based on texture differences, has been studied for more than forty years, see [64, 73, 81]. A great variety of different approaches to the problem have been proposed since then. Many of these combine generic segmentation approaches, which could also be implemented for merely intensity-based segmentation, with sets of quantitative texture descriptors that are used as inputs to the segmentation. For example, [9, 57, 67, 68] are based on active contour or active region models for segmentation, whereas [35] is an example of a clustering-based method. Nevertheless, texture segmentation continues to challenge researchers; in particular, improvements on the side of texture descriptors are still desirable.
Note that the task of texture segmentation involves a conflict: On one hand, textures cannot be detected on the single-pixel level, necessitating the inclusion of neighbourhoods in texture descriptor computation. On the other hand, the intended output of a segmentation is a unique assignment of each pixel to a segment, which requires fixing the segment boundaries at pixel resolution. To allow sufficiently precise localisation of boundaries, texture descriptors should therefore not use larger patches than necessary to distinguish the textures present in an image.
The content of this paper is centred around a set of texture descriptors introduced in [75] based on graph representations of local image structure. This approach seems to be the first to exploit graph models for the discrimination of textures. Note that even texture segmentation approaches in the literature that use graph cuts for the segmentation task rely on non-graph-based texture descriptors to bring the texture information into the graph-cut formulation, see e.g. [71]. Our texture segmentation approach, which was already briefly demonstrated in [76], instead integrates graph-based texture descriptors into a non-graph-based segmentation framework, compare Section 3.
2 Graph-Entropy-Based Texture Descriptors
Throughout the history of texture processing, quantitative texture descriptors have played an important role. Assigning a tuple of numerical values to a texture provides an interface to established image processing algorithms that were originally designed to act on intensities, and thereby makes it possible to devise modular frameworks for image processing tasks that involve texture information.
Following this modular approach, we will attack the texture segmentation task by combining a set of texture descriptors with a wellestablished image segmentation algorithm. In this section we will introduce the texture descriptors whereas the following section will be devoted to describing the segmentation method.
Given the variety of different textures that exist in natural images, it cannot be expected that one single texture descriptor will be suitable to discriminate arbitrary textures. Instead, it will be sensible to come up with a set of descriptors that complement each other well in distinguishing different kinds of textures. To keep the set of descriptors at a manageable size, the individual descriptors should nevertheless be able to discriminate substantial classes of textures. On the other hand, it will be useful both for theoretical analysis and for practical computation if the set of descriptors is not entirely disparate but based on some powerful common concept.
In [75] a family of texture descriptors was introduced based on the application of several graph indices to graphs representing local image information. In combining six sets of graphs derived from an image, whose computation is based on common principles, with a number of different but related graph indices, this family of descriptors is indeed built on a common concept.
The descriptors were evaluated in [75] in a simple texture discrimination task, and turned out to yield results competitive with Haralick features [36, 37], a wellestablished concept in texture analysis. In this comparison, graph indices based on entropy measures stood out by their texture discrimination rate.
In the following, we recall the construction of texture descriptors from [75], focussing on a subset of the descriptors discussed there. The first step is the construction of graphs from image patches. In the second step, graph indices are computed from these graphs.
2.1 Graph Construction
A discrete greyscale image is given as an array of real intensity values sampled at the nodes of a regular grid. The nodes are points $x_{i,j} = (i\,h_x, j\,h_y)$ in the plane where $i = 1, \ldots, N$, $j = 1, \ldots, M$. The spatial mesh sizes $h_x$ and $h_y$ are often assumed to be $1$ in image processing, which we will do also here for simplicity. Denoting the intensity values by $u_{i,j}$ and assuming that $1 \le i \le N$, $1 \le j \le M$, the image is then described as the array $u = (u_{i,j})$.
The nodes of the grid, thus the pixels of the image, can naturally be considered as vertices of a graph in which neighbouring pixels are connected by edges. We will call this graph the pixel graph of the image. Two common choices for which pixels are considered as neighbours are based on 4-neighbourhoods, in which pixel $(i,j)$ has the two horizontal neighbours $(i \pm 1, j)$ and the two vertical neighbours $(i, j \pm 1)$, or 8-neighbourhoods, in which also the four diagonal neighbours $(i \pm 1, j \pm 1)$ are included in the neighbourhood. Whereas the 4-neighbourhood setting leads to a somewhat simpler pixel graph (particularly, it is planar), 8-neighbourhoods are better suited to capture the geometry of the underlying (Euclidean) plane. In this work, we will mostly use 8-neighbourhoods. See [47, Sec. 1.5] for more variants of graphs assigned to images.
We upgrade the pixel graph to an edge-weighted pixel graph by defining edge weights $w(p, q)$ for neighbouring pixels $p$, $q$ via

(1)   $w(p, q) = \lVert x_p - x_q \rVert_2 + \beta \, \lvert u_p - u_q \rvert \,,$

i.e. an $\ell^1$ sum of the spatial distance of grid nodes (where $\lVert \cdot \rVert_2$ denotes the Euclidean norm) and the contrast of their corresponding intensity values, weighted by a positive contrast scale $\beta$. This construction can of course be generalised by replacing the Euclidean norm in the image plane, and the $\ell^1$ sum, by other norms. With various settings for these norms, it has been used e.g. in [45, 46, 77, 78] to construct larger spatially adaptive neighbourhoods in images, so-called morphological amoebas. See also [75] for a more detailed description of the amoeba framework in a graph-based terminology.
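For concreteness, the edge-weight construction of (1) with an 8-neighbourhood can be sketched as follows; this is an illustrative sketch with function names of our own choosing, not code from [75]:

```python
import math

# 8-neighbourhood offsets (horizontal, vertical and diagonal neighbours)
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def edge_weighted_pixel_graph(u, beta):
    """Edges {(p, q): weight} of the edge-weighted pixel graph for a
    grey-value image u (2D list), with spatial mesh size 1 and contrast
    scale beta; weight = spatial distance + beta * intensity contrast."""
    h, w = len(u), len(u[0])
    edges = {}
    for i in range(h):
        for j in range(w):
            for di, dj in OFFSETS:
                ni, nj = i + di, j + dj
                # store each undirected edge once, with ordered endpoints
                if 0 <= ni < h and 0 <= nj < w and (i, j) < (ni, nj):
                    spatial = math.hypot(di, dj)      # 1 or sqrt(2)
                    contrast = abs(u[i][j] - u[ni][nj])
                    edges[((i, j), (ni, nj))] = spatial + beta * contrast
    return edges

g = edge_weighted_pixel_graph([[0.0, 1.0], [0.0, 1.0]], beta=2.0)
```

In the small example, the horizontal edge crosses the intensity step and receives weight 1 + 2·1 = 3, while the vertical edge within a constant column keeps its purely spatial weight 1.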
All graphs that will enter the texture descriptor construction are derived from the pixel graph or the edge-weighted pixel graph of the image. First, given a pixel $p$ and a radius $\varrho > 0$, we define the Euclidean patch graph $G_{\mathrm{E}}(p, \varrho)$ as the subgraph of the edge-weighted pixel graph which includes all nodes $q$ with Euclidean distance $\lVert x_q - x_p \rVert_2 \le \varrho$. In this graph, image information is encoded solely in the edge weights.
Second, we define the adaptive patch graph $G_{\mathrm{A}}(p, \varrho)$ as the subgraph of the edge-weighted pixel graph which includes all nodes $q$ for which it contains a path from $p$ to $q$ with total weight less than or equal to $\varrho$. In the terminology of [45, 46, 77, 78], the node set of $G_{\mathrm{A}}(p, \varrho)$ is a morphological amoeba of amoeba radius $\varrho$ around $p$, which we will denote by $A(p, \varrho)$. Note that the graph $G_{\mathrm{A}}(p, \varrho)$ encodes image information not only in its edge weights, but also in its node set $A(p, \varrho)$.
One obvious way to compute $A(p, \varrho)$ is by Dijkstra's shortest path algorithm [28] with $p$ as starting point. A natural by-product of this algorithm, which is not used in amoeba-based image filtering as in [45] etc., is the Dijkstra search tree, which we denote as $T_{\mathrm{A}}(p, \varrho)$. This is the third candidate graph for our texture description. Image information is encoded in this graph in three ways: in the edge weights, the node set, and the connectivity of the tree.
Dropping the edge weights from $T_{\mathrm{A}}(p, \varrho)$, we obtain an unweighted tree $T_{\mathrm{A}}^{0}(p, \varrho)$ which still encodes image information in its node set and connectivity. Finally, a Dijkstra search tree $T_{\mathrm{E}}(p, \varrho)$ and its unweighted counterpart $T_{\mathrm{E}}^{0}(p, \varrho)$ can be obtained by applying Dijkstra's shortest path algorithm within the Euclidean patch graph $G_{\mathrm{E}}(p, \varrho)$. Whereas $T_{\mathrm{E}}(p, \varrho)$ encodes image information in the edge weights and connectivity, $T_{\mathrm{E}}^{0}(p, \varrho)$ does so only in the connectivity.
Applying these procedures to all pixels of a discrete image, we obtain six collections of graphs which represent different combinations of three cues to local image information (edge weights, node sets, connectivity) and can thus be expected to be suitable for texture discrimination. In the following we will drop the arguments $p$, $\varrho$ and use simply $G_{\mathrm{E}}$, $G_{\mathrm{A}}$, $T_{\mathrm{A}}$, $T_{\mathrm{A}}^{0}$, $T_{\mathrm{E}}$, $T_{\mathrm{E}}^{0}$ to refer to the collections of graphs.
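The computation of the amoeba together with its Dijkstra search tree can be sketched as follows (a minimal illustration assuming an adjacency-dictionary representation of the edge-weighted pixel graph; all names are ours, not from the paper):

```python
import heapq

def dijkstra_amoeba(adj, p, radius):
    """Run Dijkstra's algorithm from node p over a weighted graph given
    as adj[node] = [(neighbour, weight), ...]; return the amoeba (all
    nodes with geodesic distance <= radius from p) together with the
    predecessor map, which encodes the Dijkstra search tree."""
    dist = {p: 0.0}
    pred = {p: None}
    heap = [(0.0, p)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):      # stale heap entry
            continue
        for w, wt in adj.get(v, ()):
            nd = d + wt
            # only expand within the amoeba radius
            if nd <= radius and nd < dist.get(w, float("inf")):
                dist[w] = nd
                pred[w] = v
                heapq.heappush(heap, (nd, w))
    return set(dist), pred

# tiny example: path graph a - b - c with unit weights, radius 1.5
adj = {'a': [('b', 1.0)], 'b': [('a', 1.0), ('c', 1.0)], 'c': [('b', 1.0)]}
amoeba, tree = dijkstra_amoeba(adj, 'a', 1.5)
```

The returned node set corresponds to the amoeba, and the predecessor map gives, for each node, its parent edge in the search tree; dropping the stored weights yields the unweighted tree variant.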
2.2 Entropy-Based Graph Indices
In order to turn the collections of graphs into quantitative texture descriptors suitable for texture analysis tasks, the realm of graph indices developed in quantitative graph theory lends itself as a powerful tool.
In [75], a selection of graph indices was considered for this purpose, including on one hand distance-based graph indices (the Wiener index [79], the Harary index [62] and the Balaban index [3]) and on the other hand entropy-based indices (Bonchev–Trinajstić indices [5, 6] and Dehmer entropies [20]).
The so-obtained set of texture descriptors was evaluated in [75] with respect to their discrimination power and diversity. Using nine textures from a database [60], texture discrimination power was quantified based on simple statistics (mean value and standard deviation) of the descriptor values within single-texture patches, calibrating thresholds for certain and uncertain discrimination of textures within the set of textures. Diversity of descriptors was measured based on the overlap in the sets of texture pairs discriminated by different descriptors. Despite the somewhat ad-hoc character of the threshold calibration, the study gives valuable hints for the selection of powerful subsets of the texture descriptors.

Among the descriptors analysed, particularly the entropy-based descriptors ranked medium to high regarding discrimination power for the sample set of textures. For the present work, we therefore focus on three entropy measures which we will recall in the following, namely the two Dehmer entropies as well as Bonchev and Trinajstić's mean information on distances. The latter is restricted by its construction to unweighted graphs, and is therefore used with the unweighted Dijkstra trees $T_{\mathrm{A}}^{0}$ and $T_{\mathrm{E}}^{0}$. The Dehmer entropies can be combined with all six graph collections.
In [75], the Dehmer entropies on the patch graphs $G_{\mathrm{E}}$ and $G_{\mathrm{A}}$ achieved the highest rates of certain discrimination of textures, and outperformed the Haralick features included in the study. Some of the other descriptors based on Dehmer entropies as well as the Bonchev–Trinajstić information measures achieved middle ranks, combining medium rates of certain discrimination with uncertain discrimination of almost all other texture pairs; thereby, they were still comparable to Haralick features and distance-based graph indices.
2.2.1 Shannon’s Entropy
The measures considered here are based on Shannon's entropy [70]

(2)   $H(p) = -\sum_{i=1}^{n} p_i \log_2 p_i$

that measures the information content of a discrete probability distribution $p = (p_1, \ldots, p_n)$, $p_i \ge 0$, $\sum_{i=1}^{n} p_i = 1$. (Note that for $p_i = 0$ one has to set $p_i \log_2 p_i = 0$ in (2) by limit.)

2.2.2 Bonchev and Trinajstić's Mean Information on Distances
Introduced in [6] and further investigated in [5], the mean information on distances $\bar{I}_{\mathrm{D}}$ is the entropy measure resulting from an information functional on the path lengths in a graph. Let a graph $G$ with $N$ vertices be given, and let $d(u, v)$ denote the length of a shortest path from $u$ to $v$ in $G$ (unweighted, i.e. each edge counting $1$). Let $\delta$ be the diameter of $G$. For each $i \in \{1, \ldots, \delta\}$, let

(4)   $k_i := \#\bigl\{\{u, v\} : d(u, v) = i\bigr\}$

be the number of vertex pairs at distance $i$. The mean information on distances then is the entropy measure based on the information functional $i \mapsto k_i$, i.e.

(5)   $\bar{I}_{\mathrm{D}}(G) = -\sum_{i=1}^{\delta} \frac{k_i}{K} \log_2 \frac{k_i}{K} \,, \qquad K := \sum_{i=1}^{\delta} k_i = \binom{N}{2} \,.$
Let us briefly mention that [6] also introduces the mean information on realised distances, which we will not further consider here. As an entropy measure, it can be derived from the information functional $\{u, v\} \mapsto d(u, v)$ on the set of all vertex pairs of $G$. As pointed out in [75], $\bar{I}_{\mathrm{D}}$ can be generalised straightforwardly to edge-weighted graphs by measuring distances with edge weights, but a similar generalisation of the mean information on realised distances would be degenerate because in generic cases all edge-weighted distances in a graph will be distinct, so that each realised distance value occurs exactly once. Therefore we will use the mean information on distances only with the unweighted graphs $T_{\mathrm{A}}^{0}$ and $T_{\mathrm{E}}^{0}$.
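Reading (4)–(5) as the Shannon entropy of the shortest-path-length distribution, the measure can be sketched for unweighted graphs as follows (illustrative code, names ours):

```python
from collections import deque
import math

def mean_information_on_distances(adj):
    """Bonchev-Trinajstic mean information on distances of a connected
    unweighted graph (adjacency dict): the Shannon entropy of the
    distribution of shortest-path lengths over all vertex pairs."""
    counts = {}                      # k_i: ordered vertex pairs at distance i
    for s in adj:
        # breadth-first search gives unweighted shortest-path lengths
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        for d in dist.values():
            if d > 0:
                counts[d] = counts.get(d, 0) + 1
    # counting ordered pairs doubles every k_i, leaving the ratios unchanged
    total = sum(counts.values())
    return -sum((k / total) * math.log2(k / total) for k in counts.values())
```

For instance, on the path graph a–b–c the distance values 1 and 2 occur with relative frequencies 2/3 and 1/3, while on a triangle all distances are equal and the measure vanishes.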
2.2.3 Dehmer Entropies
The two entropy measures $I_{f_V}$ and $I_{f_P}$ for unweighted graphs $G$ were introduced in [20]. Their high discriminative power for large sets of graphs was impressively demonstrated in [26]. Both measures rely on information functionals on the vertex set of $G$ whose construction involves spheres of varying radius around each vertex $v$. Note that the sphere $S_k(v)$ in $G$ is the set of vertices $u$ with $d(u, v) = k$.
For $I_{f_V}$ the information functional on vertices $v$ of an unweighted graph $G$ is defined as [20]

(6)   $f_V(v) = \exp\bigl(c_1 s_1(v) + c_2 s_2(v) + \ldots + c_{\delta} s_{\delta}(v)\bigr)$

where

(7)   $s_k(v) := \# S_k(v)$

is the cardinality of the sphere $S_k(v)$ around $v$, with positive parameters $c_1, \ldots, c_{\delta}$, $\delta$ being the diameter of $G$. (Note that [20] used a general exponential with base $\alpha$. For the purpose of the present paper, however, this additional parameter is easily eliminated by multiplying the coefficients $c_k$ with $\ln \alpha$.)
For $I_{f_P}$ the information functional relies on the quantities

(8)   $d_k(v) := \sum_{u \in S_k(v)} d(u, v) = k \, s_k(v) \,,$

i.e. $d_k(v)$ is the sum of distances from $v$ to all points in its sphere $S_k(v)$. With similar parameters $c_1, \ldots, c_{\delta}$ as before, one defines then

(9)   $f_P(v) = \exp\bigl(c_1 d_1(v) + c_2 d_2(v) + \ldots + c_{\delta} d_{\delta}(v)\bigr) \,.$
As pointed out in [75], both information functionals, and thus the resulting entropy measures $I_{f_V}$, $I_{f_P}$, can be adapted to edge-weighted graphs via

(10)   $f_V(v) = \exp\Bigl(\sum_{u \neq v} h\bigl(d(u, v)\bigr)\Bigr)$

(11)   $f_P(v) = \exp\Bigl(\sum_{u \neq v} d(u, v) \, h\bigl(d(u, v)\bigr)\Bigr)$

where distances $d(u, v)$ are now measured using the edge weights, and $h$ is a decreasing function interpolating a reverse partial sum series of the original coefficients $c_k$.

Further following [75], we focus on the specific choice

(12)   $c_k = \mathrm{e}^{-k}$

(an instance of the exponential weighting scheme from [26]) and obtain accordingly $h(t) = c \, \mathrm{e}^{-t}$ with a positive constant $c$, which yields

(13)   $f_V(v) = \exp\Bigl(c \sum_{u \neq v} \mathrm{e}^{-d(u, v)}\Bigr)$

(14)   $f_P(v) = \exp\Bigl(c \sum_{u \neq v} d(u, v) \, \mathrm{e}^{-d(u, v)}\Bigr)$

as the final form of the information functionals for our construction of texture descriptors.
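From the final functionals, the graph entropy is obtained, following Dehmer's construction, as the Shannon entropy of the normalised functional values $f(v) / \sum_u f(u)$. A sketch for the $f_V$ variant, operating on a precomputed matrix of pairwise graph distances (illustrative code, names ours, not from [75]):

```python
import math

def dehmer_entropy_fv(dist, c=1.0):
    """Dehmer-type graph entropy with exponentially weighted information
    functional f(v) = exp(c * sum_{u != v} exp(-d(v, u))), evaluated on a
    full matrix dist[v][u] of pairwise graph distances; the entropy is
    taken over the normalised values f(v) / sum_u f(u)."""
    n = len(dist)
    f = [math.exp(c * sum(math.exp(-dist[v][u]) for u in range(n) if u != v))
         for v in range(n)]
    total = sum(f)
    return -sum((fv / total) * math.log2(fv / total) for fv in f)
```

On a vertex-transitive graph all functional values coincide, so the entropy attains its maximum $\log_2 n$; the descriptor becomes informative precisely when the local distance structure differs between vertices.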
3 Geodesic Active Contours
We use for our experiments a wellestablished segmentation method based on partial differential equations (PDE). Introduced in [11, 42], geodesic active contours (GAC) perform a contrastbased segmentation of a (greyscale) input image .
Of course, other contrast-based segmentation methods could be chosen, including clustering [15, 50] or graph-cut methods [8, 39]. Advantages or disadvantages of these methods in connection with graph-entropy-based texture descriptors may be studied in future work. For the time being we focus on the texture descriptors themselves, so it suffices to use just one well-established standard method.
3.1 Basic GAC Evolution for Greyscale Images
From the input image $u$, a Gaussian-smoothed image $u_\sigma = G_\sigma * u$ is computed, where $G_\sigma$ is a Gaussian kernel of standard deviation $\sigma$. From $u_\sigma$, one computes an edge map $g_I := g\bigl(\lvert \nabla u_\sigma \rvert^2\bigr)$ with the help of a decreasing and bounded function $g$ with $g(0) = 1$ and $g(s) \to 0$ for $s \to \infty$. A popular choice for $g$ is

(15)   $g(s^2) = \dfrac{1}{1 + s^2 / \lambda^2} \,,$

which has originally been introduced by Perona and Malik [59] as a diffusivity function for nonlinear diffusion filtering of images. Herein, $\lambda > 0$ is a contrast parameter that acts as a threshold distinguishing high gradients (indicating probable edges) from small ones.
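As a small illustration (assuming unit mesh size and omitting the Gaussian presmoothing for brevity), the edge map computation can be sketched as:

```python
import numpy as np

def perona_malik(s2, lam):
    """Perona-Malik function g(s^2) = 1 / (1 + s^2 / lambda^2): close to 1
    in flat regions (s << lambda), close to 0 at strong edges."""
    return 1.0 / (1.0 + s2 / lam ** 2)

def edge_map(u, lam):
    """Edge map g(|grad u|^2) via central differences; in the full method
    a Gaussian-smoothed image would be used in place of u."""
    uy, ux = np.gradient(np.asarray(u, dtype=float))
    return perona_malik(ux ** 2 + uy ** 2, lam)
```

The contrast parameter acts as the crossover point: gradients well below it leave the edge map near 1, gradients well above it push it toward 0.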
In addition to the input image, GAC require an initial contour (a regular closed curve) specified e.g. by user input. This contour is embedded into a level-set function $\varphi_0$, i.e. $\varphi_0$ is a sufficiently smooth function in the image plane whose zero level set (the set of all points $x$ in the image plane for which $\varphi_0(x) = 0$) is the given contour. For example, $\varphi_0$ can be introduced as a signed distance function: $\varphi_0(x)$ is zero if $x$ lies on the contour; it is minus the distance of $x$ to the contour if $x$ lies in the region enclosed by the contour, and plus the same distance if $x$ lies in the outer region.
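A signed distance initialisation for a circular initial contour can be sketched as follows (illustrative code with the sign convention just described, negative inside the contour):

```python
import numpy as np

def circle_level_set(shape, centre, radius):
    """Signed distance level-set function for a circular initial contour:
    negative inside the circle, zero on it, positive outside."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.hypot(yy - centre[0], xx - centre[1]) - radius

phi = circle_level_set((64, 64), (32, 32), 10.0)
```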
One then takes $\varphi_0$ as initial condition at time $t = 0$ for the parabolic PDE

(16)   $\partial_t \varphi = \lvert \nabla \varphi \rvert \, \operatorname{div}\!\left( g_I \, \dfrac{\nabla \varphi}{\lvert \nabla \varphi \rvert} \right)$

for a time-dependent level-set function $\varphi$. At each time $t > 0$, an evolved contour can be extracted from $\varphi(\cdot, t)$ as its zero level set. For suitable input images and initialisations and with appropriate parameters, the contours lock in at a steady state that provides a contrast-based segmentation.
To understand equation (16) one can compare it to the curvature motion equation $\partial_t \varphi = \lvert \nabla \varphi \rvert \, \operatorname{div}\bigl(\nabla \varphi / \lvert \nabla \varphi \rvert\bigr)$ that would evolve all level curves of $\varphi$ by an inward movement proportional to their curvature. In (16), this inward movement of level curves is modulated by the edge map $g_I$, which slows down the curve displacement at high-contrast locations, so that contours stick there.
The name geodesic active contours is due to the fact that the contour evolution associated with (16) can be understood as gradient descent for the curve length of the contour in an image-adaptive metric (a Riemannian metric whose metric tensor is $g_I^2$ times the unit matrix), thus yielding a final contour that is a geodesic with respect to this metric.
3.2 Force Terms
In its pure form (16), geodesic active contours require the initial contour (at least most of it) to be placed outside the object to be segmented. In some situations, however, it is easier to specify an initial contour inside an object, particularly if the intensities within the object are fairly homogeneous but many spurious edges irritating the segmentation exist in the background region.
Moreover, although the level-set formulation can to some extent handle topology changes, such as the splitting of one level curve into several curves encircling distinct objects, it has limitations when the topology of the segmentation becomes too complex.
As a remedy for both difficulties, one can modify (16) by adding a force term to its right-hand side. Adding a force term was first proposed in [14] (by the name of balloon force) whereas the specific form of the force term $\nu \, g_I \, \lvert \nabla \varphi \rvert$ weighted with the edge map was proposed in [11, 42, 51]. Depending on the sign of $\nu$, this force exercises an inward or outward pressure on the contour, which (i) speeds up the curve evolution, (ii) supports the handling of complex segmentation topologies, and (iii) also enables segmentation of objects from initial contours placed inside.
The modified GAC evolution with force term,

\partial_t u = |\nabla u| \, \mathrm{div}\!\left( g \, \frac{\nabla u}{|\nabla u|} \right) + \nu \, g \, |\nabla u| ,   (17)

(with the level-set function u, the edge map g and the force weight \nu)
will be our segmentation method when performing texture segmentation based on only one quantitative texture descriptor. In this case, the texture descriptor is used as the input image from which the edge map is computed.
3.3 Multi-Channel Images
It is straightforward to extend the GAC method, including its modified version with force term, to multi-channel input images, where each location in the image plane is assigned a tuple of intensities. A common case are RGB colour images with their three channels.
In fact, equations (16) and (17) require almost no change, as even for multi-channel input images one computes the evolution of a single real-valued level-set function. What changes is the computation of the edge map: instead of the gradient norm of the smoothed single-channel image, one uses the Frobenius norm of the Jacobian of the Gaussian-smoothed multi-channel input image to obtain the edge map.
Equation (17) with this edge map will be our segmentation method when performing texture segmentation with multiple texture descriptors. The input image will have the individual texture descriptors as channels. To weight the influence of texture descriptors, the channels may be multiplied by scalar factors.
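The multi-channel edge map just described can be sketched in a few lines. The decreasing function applied to the squared Frobenius norm (a Perona–Malik-type diffusivity with contrast parameter `lam`) is an illustrative assumption; the paper's exact edge-stopping function is defined with equation (16) and may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_map(channels, sigma=1.0, lam=1.0):
    """channels: list of 2-D arrays (texture descriptors or colour channels).
    Returns a decreasing function of the squared Frobenius norm of the
    Jacobian of the Gaussian-smoothed multi-channel image."""
    frob_sq = np.zeros_like(channels[0], dtype=float)
    for c in channels:
        sm = gaussian_filter(np.asarray(c, dtype=float), sigma)  # pre-smoothing
        gy, gx = np.gradient(sm)                                 # per-channel gradient
        frob_sq += gx**2 + gy**2                # summing over channels gives |J|_F^2
    return 1.0 / (1.0 + frob_sq / lam**2)       # close to 1 in flat regions, small at edges
```

Passing the texture descriptor channels (possibly rescaled by weighting factors) to `edge_map` yields the input for the level-set evolution (17).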
3.4 Remarks on Numerics
For numerical computation we rewrite PDE (17) as

\partial_t u = g \, |\nabla u| \, \mathrm{div}\!\left( \frac{\nabla u}{|\nabla u|} \right) + \nabla g \cdot \nabla u + \nu \, g \, |\nabla u|   (18)

(where we have omitted the argument of the edge map g, which is a fixed input function to the PDE anyway).
Following established practice, we then use an explicit (Euler forward) numerical scheme in which the right-hand side is spatially discretised as follows. The first term, the curvature term, is discretised using central differences. For the second term, the advection term involving the gradient of the edge map, an upwind discretisation [17, 65] is used in which the upwind direction is determined from central-difference approximations of the edge map gradient. The third term, the force term, is discretised with an upwind discretisation, too; here, the upwind direction depends on the components of the level-set gradient and the sign of the force weight.
Although a detailed stability analysis for this widely used type of explicit scheme for the GAC equation seems to be missing, the scheme works for time step sizes up to a certain bound (depending on the spatial mesh size) in the force-free case; for a nonzero force term, the time step needs to be reduced somewhat. In our experiments in Section 4 we use a fixed time step size throughout.
For the level-set function, we use the signed distance function of the initial contour as initialisation. Since the shape of the level-set function changes during the evolution, creating steeper ascents in some regions while flattening slopes elsewhere, we reinitialise it to the signed distance function of its current zero level set after a fixed number of iterations.
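The spatial discretisation just described can be sketched as a single explicit Euler step. Grid spacing 1 and the parameter values below are assumptions for illustration, and which sign of the force expands or shrinks the contour depends on the chosen orientation of the level-set function.

```python
import numpy as np

def gac_step(u, g, nu=0.0, tau=0.1, eps=1e-8):
    """One explicit Euler step of (18): curvature term with central
    differences, advection and force terms with first-order upwinding."""
    up = np.pad(u, 1, mode='edge')
    gp = np.pad(g, 1, mode='edge')
    # central differences for the curvature term g |grad u| div(grad u / |grad u|)
    ux = (up[1:-1, 2:] - up[1:-1, :-2]) / 2.0
    uy = (up[2:, 1:-1] - up[:-2, 1:-1]) / 2.0
    uxx = up[1:-1, 2:] - 2.0 * u + up[1:-1, :-2]
    uyy = up[2:, 1:-1] - 2.0 * u + up[:-2, 1:-1]
    uxy = (up[2:, 2:] - up[2:, :-2] - up[:-2, 2:] + up[:-2, :-2]) / 4.0
    curv = (uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2) / (ux**2 + uy**2 + eps)
    # one-sided differences for the upwind terms
    dxm, dxp = u - up[1:-1, :-2], up[1:-1, 2:] - u
    dym, dyp = u - up[:-2, 1:-1], up[2:, 1:-1] - u
    # advection term grad(g) . grad(u), upwinded on the sign of grad(g)
    gx = (gp[1:-1, 2:] - gp[1:-1, :-2]) / 2.0
    gy = (gp[2:, 1:-1] - gp[:-2, 1:-1]) / 2.0
    dot = (np.maximum(gx, 0) * dxp + np.minimum(gx, 0) * dxm
           + np.maximum(gy, 0) * dyp + np.minimum(gy, 0) * dym)
    # force term nu * g * |grad u|, Osher-Sethian upwinding on the sign of nu
    if nu >= 0:
        grad_up = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2
                          + np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
    else:
        grad_up = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                          + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    return u + tau * (g * curv + dot + nu * g * grad_up)
```

Iterating `gac_step` with a signed distance initialisation (and periodic reinitialisation, as described above) yields the contour evolution.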
4 Texture Segmentation Experiments
In this section, we present experiments on two synthetic and one real-world test image that demonstrate that graph-entropy-based texture descriptors can be used for texture-based segmentation. An experiment similar to our second synthetic example was already presented in [76].
4.1 First Synthetic Example
In our first example we use a synthetic image, shown in Figure 1(c), which is composed from two textures, see Figure 1(a) and (b), with a simple shape (the letter ‘E’) switching between the two. Note that the two textures were also among the nine textures studied in [75] for the texture discrimination task. With its use of real-world textures, this synthetic example mimics a realistic segmentation task. At the same time, its synthetic construction provides a ground truth against which segmentation results can be compared.
Figure 2 shows the results of eight graph-entropy-based texture descriptors for the test image. In particular, the combination of the Dehmer entropy with all six graph variants from Section 2.1 is shown, together with results on the weighted Dijkstra trees in non-adaptive and adaptive patches. The patch radii for non-adaptive and adaptive patches and the contrast scale were fixed to the same values as in [75]; these are based on values that work across various test images in the context of morphological amoeba image filtering. A further investigation of variations of these parameters is left for future work.
Visual inspection of Figure 2 indicates that, for this specific textured image, the entropy measure separates the two textures well, in particular in combination with the weighted Dijkstra tree settings in both adaptive and non-adaptive patches, see frames (b) and (f). The other results in frames (c), (e) and (g) show insufficient contrast along some parts of the contour of the letter ‘E’. The descriptor in frame (a), which was identified in [75] as one with high texture discrimination power, does not distinguish these two textures clearly but creates massive oversegmentation within each of them. In a sense, this oversegmentation is the downside of its high texture discrimination power. Note, however, that other descriptors of the same family also tend towards this kind of oversegmentation.
Regarding the descriptor shown in Figure 2(d) and (h), there is a huge difference between the adaptive and non-adaptive patch settings: the distinction of the two textures is much better with non-adaptive patches.
Finally, we show in Figure 3 the geodesic active contour segmentation of the test image based on one of these descriptors. We start from an initial contour inside the ‘E’ shape, see Figure 3(a), and use an expansion force to drive the contour evolution in an outward direction. Frames (b) and (c) show two intermediate stages of the evolution, in which it is evident that the contour starts to align with the boundary between the two textures. Frame (d) shows the steady state reached after a number of iterations. Here, the overall shape of the letter ‘E’ is reasonably approximated, with deviations caused by small-scale texture details.
The precision of the segmentation could be increased slightly by combining more than one texture descriptor; we do not pursue this direction here.
4.2 Second Synthetic Example
In our second experiment, Figure 4, we again use a synthetic test image in which foreground and background segments are defined via the ‘E’ letter shape, such that the desired segmentation result is again known as a ground truth. Also in this image we combine two realistic textures, which can be seen as a simplified version of the foreground and background textures of the real-world test image, Figure 5(a), used in the next section. This time, the foreground is filled with a stripe pattern, whereas the background is noise with uniform distribution over an intensity range, see Figure 4(a). In frames (b)–(d) of Figure 4 we show the texture descriptors obtained with the three graph settings in adaptive patches, using again the same patch radius and contrast scale. The descriptor that was used for the segmentation in Section 4.1 does not visually distinguish foreground from background satisfactorily here, whereas the one that provided no clear distinction of the two textures in Section 4.1 clearly stands out here. This underlines the necessity of considering multiple descriptors which complement each other in distinguishing textures. Our GAC segmentation of the test image, shown in frames (e)–(h), is based on the latter texture descriptor and quickly converges to a fairly good approximation of the segment boundary.
4.3 Real-World Example
In our last experiment, Figure 5, we consider a real-world image showing a zebra, see frame (a). In a sense, this experiment resembles the synthetic case from Section 4.2, because again a foreground dominated by a clear stripe pattern is to be distinguished from a background filled with small-scale detail. In frames (b)–(e), four texture descriptors are shown. On account of the higher resolution of the test image, the patch radius has been chosen slightly larger than in the previous examples, whereas the contrast scale was retained. As can be seen in frame (b), the first descriptor shows the same kind of oversegmentation behaviour as observed in Section 4.1; however, it also separates a large part of the zebra shape well from the background. The second descriptor, in frame (c), appears unsuitable here because it does not yield sufficiently similar values within the black and white stripes to recognise these as one common texture. In contrast, the descriptors in Figure 5(d) and (e) largely achieve this.
Our GAC segmentation in frames (f)–(i) uses a larger Gaussian kernel for pre-smoothing than before, in order to flatten out small-scale inhomogeneities in the texture descriptors, and combines the two descriptors from (d) and (e). With these data, a large part of the zebra, including the head and the front part of the torso, is segmented in the final steady state. Not included are the rear part and the forelegs. Note that in the foreleg region the stripes are much thinner than in the segmented region, apparently preventing the recognition of this texture as a continuation of the one from the head and front torso. In contrast, the rear part of the torso shows very thick stripes, which under the chosen patch size decompose into separate (homogeneous) textures for the black and the white stripes, as is also visible in the texture descriptors (d) and (e) themselves. Further investigation of parameter variations and the inclusion of more texture descriptors might improve this preliminary result in the future.
5 Analysis of Graph-Entropy-Based Texture Descriptors
In this section, we undertake a first attempt to analyse the texture descriptors based on the entropy measures, focussing on the question of what properties of textures are actually encoded in their information functionals. Part of this analysis remains on a heuristic level at the present stage of research, and future work will have to be invested to add precision to these arguments. This applies to the limiting procedure in Section 5.2 as well as to the concept of local fractal dimension arising in Section 5.3. We believe, however, that even in its present shape, the analysis provided in the following gives valuable intuition about the principles underlying our texture descriptors.
5.1 Rewriting the Information Functionals
For the purpose of our analysis, we generalise the information functional from (6) directly to edge-weighted graphs by replacing the series of sphere cardinalities from (7) with a monotonically increasing function,
(19) 
that measures volumes of spheres of arbitrary radius. Assuming the exponential weighting scheme (12) and a sufficiently large patch, this yields
(20) 
An analogous generalisation of (9) is
(21) 
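The common structure of both entropy measures can be sketched as follows: an information functional assigns each vertex a positive value; normalising these values gives a probability distribution over the vertices, whose Shannon entropy is the graph index. The specific functional below, exponentially weighted sphere volumes with decay parameter `lam`, is an illustrative stand-in for (20) and (21), not their exact form.

```python
import numpy as np

def dehmer_entropy(f_values):
    """Shannon entropy (in bits) of the distribution obtained by
    normalising the information functional values over the vertices."""
    f = np.asarray(f_values, dtype=float)
    p = f / f.sum()
    return float(-np.sum(p * np.log2(p)))

def sphere_functional(dist_matrix, lam=1.0):
    """Illustrative information functional: exponentially weighted sphere
    volumes, f(v) = sum over w of exp(-d(v, w) / lam)."""
    return np.exp(-np.asarray(dist_matrix, dtype=float) / lam).sum(axis=1)

# path-graph distances d(v, w) = |v - w| for 5 vertices
D = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
H = dehmer_entropy(sphere_functional(D))
```

A perfectly uniform functional yields the maximal entropy log2 of the vertex count; any concentration of the functional values lowers the entropy.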
5.2 Infinite Resolution Limits of Graphs
We now assume that the image is sampled successively on finer grids with decreasing mesh size. Note that the number of vertices in any region of the edge-weighted pixel graph, or of any of the derived edge-weighted graphs introduced in Section 2.1, grows in this process as the mesh size decreases. By using the volumes of spheres instead of the original cardinalities, (19) provides a renormalisation that compensates for this effect in (20).
Thus, it is possible to consider the limit case of vanishing mesh size. In this limit, the graphs turn into metric spaces representing the structure of a space-continuous image. Additionally, these metric spaces are endowed with a volume measure, which is the limit case of the discrete measures on graphs given by vertex counting.
In simple cases, these metric spaces with volume measure can be manifolds. For example, for a homogeneous grey image without any contrast, the limit of the edge-weighted pixel graph is an approximation to a plane, i.e. a two-dimensional manifold. For an image with extreme contrasts in one direction, e.g. a stripe pattern, the edge-weighted pixel graphs will be path graphs, resulting in a limit metric space that is essentially a one-dimensional manifold. Finally, in the extreme case of a noise image in which neighbouring pixels almost nowhere have similar grey-values, the graph will practically decompose into numerous isolated connected components, corresponding to a discrete space of dimension zero.
For more general textured images, the limit spaces will possess a more complicated topological structure. At the same time, it remains possible, of course, to measure the volumes of spheres of different radii in these spaces. Clearly, sphere volumes will increase with the sphere radius. If they fulfil a power law, the (possibly non-integer) exponent can immediately be interpreted as a dimension. The space itself is then interpreted as some type of fractal [52]. The underlying dimension concept is essentially that of the Minkowski dimension (closely related to the Hausdorff dimension) frequently used in fractal theory, with the difference that the volume measure here lives inside the object being measured instead of in an embedding space. Based on the above reasoning, values of the dimension will range between 0 and 2.
Note that even in situations in which there is no global power law for the sphere volumes, and therefore no global dimension, power laws, possibly with varying exponents, will still be approximated for a given sphere centre in suitable ranges of the radius, thus allowing one to define the fractal dimension as a quantity that varies with location and scale. This resembles the situation of most fractal concepts applied to real-world data: the power laws that are required to hold for an ideal fractal across all scales will in reality be found only for certain ranges of scales.
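The sphere-volume dimension estimate described above can be sketched for the simplest case, a uniformly weighted grid graph (the limit case of a contrast-free image): measure the sphere volumes V(r) for several radii via Dijkstra distances and fit the exponent d of the power law V(r) ∝ r^d in log–log coordinates. Owing to the 4-neighbour lattice metric and the finite radii, the estimate comes out somewhat below the ideal value 2.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def grid_graph(n, w=1.0):
    """Edge-weighted pixel graph of a flat (contrast-free) n x n image:
    4-neighbour grid with constant edge weight w."""
    A = lil_matrix((n * n, n * n))
    for i in range(n):
        for j in range(n):
            v = i * n + j
            if j + 1 < n:
                A[v, v + 1] = A[v + 1, v] = w
            if i + 1 < n:
                A[v, v + n] = A[v + n, v] = w
    return A.tocsr()

def local_dimension(adjacency, centre, radii):
    """Fit the exponent d of V(r) ~ c * r**d to the sphere volumes
    (vertex counts) around the centre vertex."""
    dist = dijkstra(adjacency, indices=centre)
    vols = np.array([np.sum(dist <= r) for r in radii], dtype=float)
    slope, _ = np.polyfit(np.log(list(radii)), np.log(vols), 1)
    return slope

n = 81
est = local_dimension(grid_graph(n), centre=(n * n) // 2, radii=range(8, 21))
```

On an image-derived, non-uniformly weighted graph, the same fit performed per centre vertex yields the spatially varying dimension discussed above.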
Dijkstra trees, too, turn into one-dimensional manifolds in the case of sharp stripe images; in other cases they will also yield fractals. Fractal dimensions, wherever applicable, will be below those observed for the corresponding full edge-weighted pixel graphs; thus, the relevant range of dimensions is again bounded by 0 from below and 2 from above.
A word of caution is in order at this point. The fractal structures obtained here as limit cases of graphs under grid refinement are not identical to the image manifolds whose fractal structure is studied as a means of texture analysis in [2, 58, 72] and elsewhere. In fact, the fractal dimensions of the latter, measured as Minkowski dimensions by means of the embedding of the image manifold of a grey-scale image into three-dimensional Euclidean space, range from 2 to 3 with increasing roughness of the image, whereas the dimensions measured in the present work go down from 2 to 0 with increasing image roughness. While it can be conjectured that these two fractal structures are related, future work will be needed to gain clarity about this relationship.
5.3 Fractal Analysis
Based on the findings from the previous section, let us now assume that the limit of one of the graph structures results in a metric space with volume measure of dimension d, in which sphere volumes are given by the equation

V(r) = c_d \, r^d   (22)

where

c_d = \frac{\pi^{d/2}}{\Gamma(d/2 + 1)}   (23)

is the volume of a unit sphere, with \Gamma denoting the Gamma function. Thus, we assume that (23) interpolates the sphere volumes of Euclidean spaces for integer d.
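The interpolating unit-sphere volume \pi^{d/2} / \Gamma(d/2 + 1) is straightforward to evaluate numerically; the following sketch checks that it reproduces the Euclidean unit-ball volumes 2, \pi and 4\pi/3 at the integer dimensions d = 1, 2, 3.

```python
from math import pi, gamma

def unit_ball_volume(d):
    """Volume of the unit sphere in dimension d, interpolating the
    Euclidean values for integer d via the Gamma function."""
    return pi**(d / 2) / gamma(d / 2 + 1)
```

For fractional d the formula interpolates smoothly, e.g. the value for d = 1.5 lies between those for d = 1 and d = 2.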
Note that this assumption has in fact two parts. The first, (22), means that a volume measure exists on the metric space that behaves homogeneously of degree d with respect to distances. In the manifold case (integer d), this is the case of vanishing curvature; for general manifolds of integer dimension, (22) would hold as an approximation for small radii.
The second assumption, (23), corresponds to the Euclideanness of the metric. For edge-weighted pixel graphs based on small discrete neighbourhoods, the volume of unit spheres actually deviates from (23), even in the limit. However, with increasing neighbourhood size, (23) is approximated better and better. Most of the following analysis does not depend specifically on (23); we will therefore return to (23) only later, for the numerical evaluation of the information functionals.
With (22) we have
(24) 
where we have substituted the integration variable accordingly. As a result, we obtain
(25) 
Analogous considerations for the functional from (21) lead to
(26) 
As pointed out before, the metric structure of the fractal will in general be more complicated, such that it does not possess a well-defined global dimension. However, such a dimension can be measured at each location and scale. The quantities stated in (25) and (26) can then be understood as functions of the local fractal dimension in a neighbourhood of the respective vertex, where the size of the neighbourhood – the scale – is controlled by the decay of the weighting function in the integrands of (20) and (21), respectively.
As a result, we find that the information functionals represent distributions over the pixels of an image patch (non-adaptive or adaptive) in which the pixels are assigned different weights depending on a local fractal dimension measure. The entropies then measure the homogeneity or inhomogeneity of this dimension distribution: for very homogeneous dimension values within a patch, the density resulting from each of the information functionals will be fairly uniform, implying high entropy. The more the dimension values are spread out, the more the density will be dominated by a few pixels with high functional values, thus yielding low entropy. The precise dependency of the entropy on the dimension distribution differs slightly between the two information functionals and also depends on the choice of the weighting parameter. Details of this dependency will be a topic of future work.
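The mechanism described in this paragraph can be illustrated with a toy computation, assuming (hypothetically) an exponential dependence f(d) = exp(beta * d) of the information functional on the local dimension d, in the spirit of the steep curves of Figure 6: homogeneous dimension values within a patch give the maximal entropy, while a spread-out dimension distribution is dominated by a few large values and gives a lower entropy.

```python
import numpy as np

def patch_entropy(dims, beta=8.0):
    """Entropy (in bits) of the information density induced by the local
    dimensions dims, under the assumed functional f(d) = exp(beta * d)."""
    f = np.exp(beta * np.asarray(dims, dtype=float))
    p = f / f.sum()
    return float(-np.sum(p * np.log2(p)))

homogeneous = patch_entropy([1.5] * 64)            # uniform density: 6 bits
spread = patch_entropy(np.linspace(0.0, 2.0, 64))  # dominated by large d
```

Increasing `beta` (a steeper functional) widens the gap between the two cases, mirroring the sensitivity discussion that follows.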
To give a basic intuition, we present in Figure 6 graphs of the information functionals as functions of the dimension for selected values of the weighting parameter. In computing these values, the specific choice (23) for the unit sphere volume has been used.
In the left column, frames (a) and (d), we use the same parameter value as in our experiments in Section 4. Here, both functionals increase drastically, by several orders of magnitude, across the relevant dimension range. In the resulting information functionals, even pixels with only slightly higher values of the dimension therefore strongly dominate the entire information density within the patch.
For an increasing weighting parameter, the rate of increase of the functionals with the dimension becomes lower. For the value shown in the second column, frames (b) and (e), of Figure 6, the variation of the functionals is already much reduced, such that vertices across the entire dimension range have a relevant influence on the information density. For even larger parameter values, the dependency of the functionals on the dimension becomes non-monotonic (as shown in frame (c)) and eventually even monotonically decreasing (not shown). It will therefore be interesting for further investigation to evaluate the texture discrimination behaviour of the entropy measures for a varying weighting parameter, as this may be a way of targeting the sensitivity of the measures specifically at certain dimension ranges.
In this context, however, it becomes evident that the weighting parameter plays two different roles at the same time. First, it steers the approximate radius of influence for the fractal dimension estimation; here, it is important that this radius of influence is smaller than the patch size underlying the graph construction, so that the cut-off of the graphs has no significant influence on the values of the information functional at the individual vertices. Second, it determines the shape and steepness of the function (compare Figure 6) that relates the local fractal dimension to the values of the information functionals. This makes it desirable to refine the parametrisation of the exponential weighting scheme (12) in future work, so that the two roles are distributed to two separate parameters.

6 Conclusion
In this paper, we have presented the framework of graph-index-based texture descriptors that was first introduced in [75]. Particular emphasis was put on entropy-based graph indices, which proved in [75] to afford medium to high sensitivity to texture differences.
We have extended the work from [75] in two directions. Firstly, we have stated an approach to texture-based image segmentation in which the texture descriptor framework is integrated with geodesic active contours [11, 42], a standard method for intensity-based image segmentation. This approach was already briefly introduced in [76] and is demonstrated here by a larger set of experiments, including two synthetic and one real-world example. Secondly, we have analysed one representative of the graph-entropy-based texture descriptors in order to gain insight into the image properties that this descriptor relies on. It turned out that it stands in close relation to measurements of the fractal dimension of certain metric spaces that arise from the graphs in the local image patches underlying our texture descriptors. Although this type of fractal dimension measurement in images differs from existing applications of fractal theory in image (and texture) analysis, as the latter treat the image manifold as the fractal object, our results indicate that the two fractal approaches are related.
Our texture descriptor framework as a whole, as well as the two novel contributions presented here, requires further research. To mention some topics, we start with the parameter selection of the texture descriptors. In [75, 76] as well as in this paper, most parameters were fixed to specific values based on heuristics. A systematic investigation of the effect of variations of all these parameters is part of ongoing work. The inclusion of novel graph descriptors proposed in the literature, e.g. [27], is a further option.
The algorithms currently used for the computation of graphentropybased texture descriptors need computation times in the range of minutes already for small images. As the algorithms have not been designed for efficiency, there is much room for improvement which will also be considered in future work.
Both texture discrimination and texture segmentation have so far been demonstrated on a proof-of-concept level. Extensive evaluation on larger sets of image data is ongoing. This is also necessary to gain more insight into the suitability of particular texture descriptors from our set for specific classes of textures.
Regarding the texture segmentation framework, the conceptual break between the graph-based set of texture descriptors and the partial differential equation for segmentation could be reduced by using, e.g., a graph-cut segmentation method. It can be asked whether such a combination even allows for some synergy between the computation steps. This is not clear so far, since the features used to weight graph edges differ: intensity contrasts in the texture descriptor phase, and texture descriptor differences in the graph-cut phase. Further on, the integration of graph-entropy-based texture descriptors into more complex segmentation frameworks will be a goal. Unsupervised segmentation approaches are not capable of handling involved segmentation tasks (as in medical diagnostics) where a highly accurate segmentation can only be achieved by including prior information on the shape and appearance of the objects to be segmented. State-of-the-art segmentation frameworks therefore combine the mechanisms of unsupervised segmentation approaches with model-based methods as introduced, e.g., in [16].
On the theoretical side, the analysis of the fractal limit of the descriptor construction will have to be refined and extended to cover all six graph settings from Section 2.1. The relations between the fractal structures arising from the graph construction and the image manifold more commonly treated in fractal-based image analysis will have to be analysed. Generally, much more theoretical work deserves to be invested in understanding the connections and possible equivalences between the very disparate approaches to texture description found in the literature. A graph-based approach like ours admits several directions for such comparisons. It can thus be speculated that it could play a pivotal role in understanding the relations between texture description methods, and help create a unifying view on different methods that would also have implications for the understanding of texture itself.
References

[1] J.-F. Aujol and A. Chambolle. Dual norms and image decomposition models. International Journal of Computer Vision, 63(1):85–104, 2005.
[2] N. Avadhanam. Robust fractal characterization of one-dimensional and two-dimensional signals. Master’s thesis, Graduate Faculty, Texas Tech University, USA, 1993.
 [3] A. Balaban. Highly discriminating distancebased topological index. Chemical Physics Letters, 89:399–404, 1982.
 [4] J.M. Barbaroux, F. Germinet, and S. Tcheremchantsev. Generalized fractal dimensions: equivalences and basic properties. Journal de Mathématiques Pures et Appliquées, 80(10):977–1012, 2001.
 [5] D. Bonchev, O. Mekenyan, and N. Trinajstić. Isomer discrimination by topological information approach. Journal of Computational Chemistry, 2(2):127–148, 1981.
 [6] D. Bonchev and N. Trinajstić. Information theory, distance matrix, and molecular branching. Journal of Chemical Physics, 67(10):4517–4533, 1977.

[7] Y. Boykov, O. Veksler, and R. Zabih. Markov random fields with efficient approximation. In Proc. 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 648–655, Santa Barbara, CA, June 1998.
[8] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, 2001.
 [9] T. Brox, M. Rousson, R. Deriche, and J. Weickert. Colour, texture, and motion in level set based segmentation and tracking. Image and Vision Computing, 28:376–390, 2010.
 [10] A. Buades, B. Coll, and J. Morel. A nonlocal algorithm for image denoising. In Proc. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 60–65, San Diego, CA, June 2005. IEEE Computer Society Press.
 [11] V. Caselles, R. Kimmel, and G. Sapiro. Geodesic active contours. In Proc. Fifth International Conference on Computer Vision, pages 694–699, Cambridge, MA, June 1995. IEEE Computer Society Press.
 [12] U. Cikalova, M. Kroening, J. Schreiber, and Y. Vertyagina. Evaluation of Alspecimen fatigue using a “smart sensor”. Physical Mesomechanics, 14(5–6):308–315, 2011.
 [13] U. Clarenz, M. Rumpf, and A. Telea. Surface processing methods for point sets using finite elements. Computers and Graphics, 28:851–868, 2004.
 [14] L. D. Cohen. On active contour models and balloons. CVGIP: Image Understanding, 53(2):211–218, Mar. 1991.
 [15] D. Comaniciu and P. Meer. Robust analysis of feature spaces: color image segmentation. In Proc. 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 750–755, San Juan, Puerto Rico, June 1997. IEEE Computer Society Press.
 [16] T. F. Cootes and C. J. Taylor. Statistical models of appearance for computer vision. Technical report, University of Manchester, UK, Oct. 2001.
 [17] R. Courant, E. Isaacson, and M. Rees. On the solution of nonlinear hyperbolic differential equations by finite differences. Communications on Pure and Applied Mathematics, 5(3):243–255, 1952.
[18] A. Cross, R. Wilson, and E. Hancock. Inexact graph matching using genetic search. Pattern Recognition, 30(6):953–970, 1997.
 [19] R. da S. Torres, A. X. Falcão, and L. da F. Costa. A graphbased approach for multiscale shape analysis. Pattern Recognition, 37:1163–1174, 2004.
 [20] M. Dehmer. Information processing in complex networks: Graph entropy and information functionals. Applied Mathematics and Computation, 201:82–94, 2008.

[21] M. Dehmer and F. Emmert-Streib. Comparing large graphs efficiently by margins of feature vectors. Applied Mathematics and Computation, 188:1699–1710, 2007.
[22] M. Dehmer and F. Emmert-Streib, editors. Quantitative Graph Theory: Mathematical Foundations and Applications. CRC Press, Boca Raton, 2014.
[23] M. Dehmer, F. Emmert-Streib, and J. Kilian. A similarity measure for graphs with low computational complexity. Applied Mathematics and Computation, 182:447–459, 2006.
[24] M. Dehmer, F. Emmert-Streib, and A. Mehler, editors. Towards an Information Theory of Complex Networks: Statistical Methods and Applications. Birkhäuser Publishing, 2012.
[25] M. Dehmer, F. Emmert-Streib, and S. Tripathi. Large-scale evaluation of molecular descriptors by means of clustering. PLoS ONE, 8(12):e83956, 2013.
 [26] M. Dehmer, M. Grabner, and K. Varmuza. Information indices with high discriminative power for graphs. PLoS ONE, 7(2):e31214, 2012.
 [27] M. Dehmer, Y. Shi, and A. Mowshowitz. Discrimination power of graph measures based on complex zeros of the partial Hosoya polynomial. Applied Mathematics and Computation, 250:352–355, 2015.
 [28] E. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.
 [29] A. Elmoataz, O. Lezoray, V.T. Ta, and S. Bougleux. Partial difference equations on graphs for local and nonlocal image processing. In O. Lezoray and L. Grady, editors, Image Processing and Analysis with Graphs: Theory and Practice, chapter 7, pages 174–206. CRC Press, Boca Raton, 2012.
[30] F. Emmert-Streib and M. Dehmer. Topological mappings between graphs, trees and generalized trees. Applied Mathematics and Computation, 186:1326–1333, 2007.
[31] F. Emmert-Streib and M. Dehmer. Exploring statistical and population aspects of network complexity. PLoS ONE, 7(5):e34523, 2012.
 [32] M. Ferrer and H. Bunke. Graph edit distance—theory, algorithms, and applications. In O. Lezoray and L. Grady, editors, Image Processing and Analysis with Graphs: Theory and Practice, chapter 13, pages 383–422. CRC Press, Boca Raton, 2012.
 [33] B. Furtula, I. Gutman, and M. Dehmer. On structuresensitivity of degreebased topological indices. Applied Mathematics and Computation, 219:8973–8978, 2013.
 [34] D. Gabor. Theory of communication. Journal of the Institution of Electrical Engineers, 93:429–457, 1946.
 [35] B. Georgescu, I. Shimshoni, and P. Meer. Mean shift based clustering in high dimensions: a texture classification example. In Proc. 2003 IEEE International Conference on Computer Vision, volume 1, pages 456–463, Nice, Oct. 2003.
 [36] R. Haralick. Statistical and structural approaches to texture. Proceedings of the IEEE, 67(5):786–804, May 1979.
 [37] R. Haralick, K. Shanmugam, and I. Dinstein. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 3(6):610–621, Nov. 1973.
[38] H. Hosoya. Topological index: a newly proposed quantity characterizing the topological nature of structural isomers of saturated hydrocarbons. Bulletin of the Chemical Society of Japan, 44(9):2332–2339, 1971.

[39] H. Ishikawa. Graph cuts—combinatorial optimization in vision. In O. Lezoray and L. Grady, editors, Image Processing and Analysis with Graphs: Theory and Practice, chapter 2, pages 25–63. CRC Press, Boca Raton, 2012.
[40] H. Ishikawa and D. Geiger. Segmentation by grouping junctions. In Proc. 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 125–131, Santa Barbara, CA, June 1998.
 [41] O. Ivanciuc, T.S. Balaban, and A. Balaban. Design of topological indices. Part 4. Reciprocal distance matrix, related local vertex invariants and topological indices. Journal of Mathematical Chemistry, 12(1):309–318, 1993.
 [42] S. Kichenassamy, A. Kumar, P. Olver, A. Tannenbaum, and A. Yezzi. Gradient flows and geometric active contour models. In Proc. Fifth International Conference on Computer Vision, pages 810–815, Cambridge, MA, June 1995. IEEE Computer Society Press.
 [43] P. V. Kuznetsov, V. E. Panin, and J. Schreiber. Fractal dimension as a characteristic of deformation stages of austenite stainless steel under tensile load. Theoretical and Applied Fracture Mechanics, 35:171–177, 2001.
 [44] G. Lendaris and G. Stanley. Diffraction pattern sampling for automatic pattern recognition. Proceedings of the IEEE, 58(2):198–216, 1970.
 [45] R. Lerallut, É. Decencière, and F. Meyer. Image processing using morphological amoebas. In C. Ronse, L. Najman, and E. Decencière, editors, Mathematical Morphology: 40 Years On, volume 30 of Computational Imaging and Vision, pages 13–22. Springer, Dordrecht, 2005.
 [46] R. Lerallut, É. Decencière, and F. Meyer. Image filtering using morphological amoebas. Image and Vision Computing, 25(4):395–404, 2007.
 [47] O. Lezoray and L. Grady. Graph theory concepts and definitions used in image processing and analysis. In O. Lezoray and L. Grady, editors, Image Processing and Analysis with Graphs: Theory and Practice, chapter 1, pages 1–24. CRC Press, Boca Raton, 2012.
 [48] O. Lezoray and L. Grady, editors. Image Processing and Analysis with Graphs: Theory and Practice. CRC Press, Boca Raton, 2012.
 [49] S. Lu and K. Fu. A syntactic approach to texture analysis. Computer Graphics and Image Processing, 7:303–330, 1978.

 [50] L. Lucchese and S. K. Mitra. Unsupervised segmentation of color images based on k-means clustering in the chromaticity plane. In Proc. IEEE Workshop on Content-Based Access of Image and Video Libraries, pages 74–78, Fort Collins, CO, 1999.
 [51] R. Malladi, J. Sethian, and B. Vemuri. Shape modeling with front propagation: a level set approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17:158–175, 1995.
 [52] B. Mandelbrot. The Fractal Geometry of Nature. W. H. Freeman and Company, 1982.
 [53] P. Maragos. Fractal aspects of speech signals: dimension and interpolation. In Proc. 1991 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 417–420, Toronto, Ontario, Canada, May 1991.
 [54] P. Maragos and F.K. Sun. Measuring the fractal dimension of signals: morphological covers and iterative optimization. IEEE Transactions on Signal Processing, 41(1):108–121, 1993.
 [55] L. Najman and F. Meyer. A short tour of mathematical morphology on edge and vertex weighted graphs. In O. Lezoray and L. Grady, editors, Image Processing and Analysis with Graphs: Theory and Practice, chapter 6, pages 141–173. CRC Press, Boca Raton, 2012.
 [56] S. Osher, A. Solé, and L. Vese. Image decomposition and restoration using total variation minimization and the norm. Multiscale Modeling and Simulation, 1(3):349–370, 2003.
 [57] N. Paragios and R. Deriche. Geodesic active regions: A new paradigm to deal with frame partition problems in computer vision. Journal of Visual Communication and Image Representation, 13(1/2):249–268, 2002.
 [58] A. P. Pentland. Fractalbased description of natural scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):661–674, 1984.
 [59] P. Perona and J. Malik. Scale space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:629–639, 1990.
 [60] R. Picard, C. Graczyk, S. Mann, J. Wachman, L. Picard, and L. Campbell. Vistex database. Online ressource, http://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html, 1995.
 [61] V. Pitsikalis and P. Maragos. Analysis and classification of speech signals by generalized fractal dimension features. Speech Communications, 51:1206–1223, 2009.
 [62] D. Plavšić, S. Nikolić, and N. Trinajstić. On the Harary index for the characterization of chemical graphs. Journal of Mathematical Chemistry, 12(1):235–250, 1993.
 [63] K. Riesen and H. Bunke. Approximate graph edit distance computation by means of bipartite graph matching. Image and Vision Computing, 27:950–959, 2009.
 [64] A. Rosenfeld and M. Thurston. Edge and curve detection for visual scene analysis. IEEE Transactions on Computers, 20(5):562–569, 1971.
 [65] E. Rouy and A. Tourin. A viscosity solutions approach to shapefromshading. SIAM Journal on Numerical Analysis, 29(3):867–884, 1992.
 [66] S. Roy and I. Cox. Maximumflow formulation of the camera stereo correspondence problem. In Proc. 1998 IEEE International Conference on Computer Vision, pages 492–499, Bombay, Jan. 1998.
 [67] C. Sagiv, N. Sochen, and Y. Zeevi. Integrated active contours for texture segmentation. IEEE Transactions on Image Processing, 15(6):1633–1646, June 2006.
 [68] B. Sandberg, T. Chan, and L. Vese. A levelset and Gaborbased active contour algorithm for segmenting textured images. Technical Report CAM0239, Department of Mathematics, University of California at Los Angeles, CA, U.S.A., July 2002.
 [69] A. Sanfeliu and K.S. Fu. A distance measure between attributed relational graphs for pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, 13(3):353–362, 1983.
 [70] C. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656, 1948.
 [71] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–906, 2000.
 [72] P. Soille and J.-F. Rivest. On the validity of fractal dimension measurements in image analysis. Journal of Visual Communication and Image Representation, 7(3):217–229, 1996.
 [73] R. Sutton and E. Hall. Texture measures for automatic classification of pulmonary disease. IEEE Transactions on Computers, 21(7):667–676, 1972.
 [74] J. Wang, K. Zhang, and G.W. Chen. Algorithms for approximate graph matching. Information Sciences, 82:45–74, 1995.
 [75] M. Welk. Discrimination of image textures using graph indices. In M. Dehmer and F. Emmert-Streib, editors, Quantitative Graph Theory: Mathematical Foundations and Applications, chapter 12, pages 355–386. CRC Press, 2014.
 [76] M. Welk. Amoeba techniques for shape and texture analysis. In M. Breuß, A. Bruckstein, P. Maragos, and S. Wuhrer, editors, Perspectives in Shape Analysis. Springer, Cham, 2016. in press.
 [77] M. Welk and M. Breuß. Morphological amoebas and partial differential equations. In P. W. Hawkes, editor, Advances in Imaging and Electron Physics, volume 185, pages 139–212. Elsevier Academic Press, 2014.
 [78] M. Welk, M. Breuß, and O. Vogel. Morphological amoebas are selfsnakes. Journal of Mathematical Imaging and Vision, 39:87–99, 2011.
 [79] H. Wiener. Structural determination of paraffin boiling points. Journal of the American Chemical Society, 69(1):17–20, 1947.
 [80] L. Zhu, W. Ng, and S. Han. Classifying graphs using theoretical metrics: a study of feasibility. In J. Xu, G. Yu, S. Zhou, and R. Unland, editors, Database Systems for Advanced Applications, volume 6637 of Lecture Notes in Computer Science, pages 53–64. Springer, Berlin, 2011.
 [81] S. Zucker. Toward a model of texture. Computer Graphics and Image Processing, 5:190–202, 1976.