1 Introduction
Image segmentation, i.e. the task of decomposing an image into disjoint regions that are roughly homogeneous in a suitable sense, is one of the fundamental image processing problems. If three or more regions are sought, one speaks of multiphase segmentation. This problem has been studied thoroughly in the literature and entirely different concepts have been put forward as the basis for image segmentation, such as fuzzy region competition [21], contour detection [2], random walks [9], and Markov random fields [22], just to name a few. Due to the variety of proposed methods, providing a comprehensive list is beyond the scope of this article, but we refer the interested reader to [31]. Then there is, of course, the class of variational approaches based on the famous Mumford-Shah energy [25].
The most straightforward application in multiphase segmentation is to divide images into regions based on their gray or color intensities [8]. A more complex task is to segment images based on their local structure. This has applications in texture segmentation [28], as well as many medical applications, such as the segmentation of blood vessels [14]. Algorithms for structure classification and segmentation usually extract local features from the image, which analyze important properties of the structures of interest, such as the image intensity, position and orientation of edges, or the local frequency spectrum [29]. In the case of texture segmentation, Gabor filters are arguably the most popular source of feature discrimination [32], often combined with other filters in so-called local spectral histograms [23]. Other methods rely on linear transforms, such as the short-time Fourier transform [3], wavelet transforms [7], or, more recently, the Stockwell transform [11]. While the part of this paper on texture segmentation uses well-proven spectral histograms to recognize regions, it differs from established methods by their integration into a variational framework, which allows one to control the connectedness of the regions.
Dealing with complex structures, such as textures, often implies a high dimensionality of the parameters describing the problem. However, in image segmentation, one is mostly interested in classifying structure into a few categories, potentially allowing for lower-dimensional representations. Dimension reduction of high-dimensional data is an immensely broad topic [30] and finds applications in many different areas of research. There exist different approaches, but the most widely used techniques are arguably clustering [27] and principal component analysis (PCA) [19]. The two are connected in the sense that the relaxed solution of k-means, one of the most popular clustering algorithms, is given by principal components [10]. PCA has been investigated in the context of variational image segmentation before, both as a means for dimension reduction [26] and to increase the contrast of color-texture indicators in natural images [17].
In materials science, an important application of structure-based segmentation is the analysis of crystals. Available methods are based on variational minimization of Mumford-Shah energies that require the local stencil of a reference crystal as prior knowledge [4, 5, 12].
1.1 Key contributions

- a widely applicable framework for image segmentation by structure is discussed, including a novel combination of PCA of high-dimensional features, the Mumford-Shah model and a robust initialization strategy, which allows for a broad choice of feature descriptors

- the framework is shown to work very well, even for extremely noisy data, in crystal segmentation, where it generalizes existing methods in the sense that no a priori information about the crystals is required
2 Variational multiphase segmentation
In this section, we briefly recall the Mumford-Shah model [25] for multiphase segmentation based on suitable indicator functions. Furthermore, we recall a convexification approach that enables an efficient numerical minimization of the model. Let Ω ⊂ R² denote the image domain. The task is to divide Ω into n pairwise disjoint regions Ω_1, …, Ω_n, based on given indicator functions f_1, …, f_n : Ω → R. f_k(x) can be interpreted as the cost of putting a point x into the set Ω_k. For instance, if an image u_0 : Ω → R is supposed to be segmented based on its gray values, possible indicator functions are f_k(x) = (u_0(x) − c_k)². Here, c_k is the average gray value of u_0 in the k-th region.
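As a toy illustration of such gray-value indicators, the following numpy sketch evaluates f_k(x) = (u_0(x) − c_k)² on a hypothetical two-region test image (the region means c_k are computed from a known split purely for illustration) and assigns each pixel to the cheapest region:

```python
import numpy as np

# Hypothetical toy image: two gray levels plus mild noise.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
img += 0.05 * rng.standard_normal(img.shape)

# Mean gray values c_k of the two (here: known) regions.
c = np.array([img[:, :16].mean(), img[:, 16:].mean()])

# Indicator functions f_k(x) = (u_0(x) - c_k)^2, one per region.
f = (img[None, :, :] - c[:, None, None]) ** 2

# Pointwise assignment to the cheapest region recovers the split.
labels = np.argmin(f, axis=0)
print(labels[0, 0], labels[0, 31])  # -> 0 1
```

Without the perimeter term of the model below, this assignment is purely pointwise; the variational model adds the spatial regularity.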
A segmentation of Ω based on the indicator functions f_k that guarantees a certain regularity of the segments can be achieved by minimizing the Mumford-Shah energy [25]:
E[Ω_1, …, Ω_n] = Σ_{k=1}^n ( ∫_{Ω_k} f_k(x) dx + (λ/2) Per(Ω_k, Ω) )    (1)
Here, Per(Ω_k, Ω) denotes the perimeter of the set Ω_k in Ω [1]. Roughly speaking, the perimeter is the length of the boundary of Ω_k, not counting the parts of the boundary that are also on the boundary of Ω. This problem is hard to address numerically since the unknown variables are sets. In particular, its discrete counterpart, known as the Potts model, is NP-hard. Thus, various convex relaxation approaches have been proposed in the past. For the sake of simplicity, we use one of the most straightforward approaches, given in [34]. Let us stress that our framework does not rely on this particular choice, but can also be combined with more sophisticated convexification approaches. Let
E[u] = Σ_{k=1}^n ( ∫_Ω u_k(x) f_k(x) dx + (λ/2) |Du_k|(Ω) ),    (2)
where u = (u_1, …, u_n) is a vector-valued labeling function and
K = { u ∈ BV(Ω, Rⁿ) : u_k ≥ 0 for k = 1, …, n and Σ_{k=1}^n u_k = 1 a.e. in Ω }    (3)
is the admissible set. Here,
|Du_k|(Ω) = sup { ∫_Ω u_k div φ dx : φ ∈ C_c¹(Ω, R²), |φ(x)| ≤ 1 for all x ∈ Ω }    (4)
denotes the total variation and BV(Ω) is the space of functions of bounded variation, i.e. the space of Lebesgue integrable functions with finite total variation. Then, the convex relaxation of (1) is to minimize (2) over the set K. The minimizer u can be interpreted as a soft segmentation and can be converted into a hard segmentation by setting Ω_k = { x ∈ Ω : u_k(x) ≥ u_l(x) for all l }.
In order to address this minimization numerically, a discretization of the energy (2) and the admissible set (3) is required. Let Ω_h be a regular 2D pixel grid. We use piecewise constant approximations u_h and f_h for u and f. The corresponding column vectors and matrices of all pixel values are denoted by a boldface letter, e.g. u. Furthermore, we denote with ∇_h the discrete gradient operator corresponding to the grid Ω_h and forward differences. Using this operator to discretize the total variation (4), the minimization of the discretized energy (2) can be posed as the following discrete saddle point problem:
min_{u ∈ K_h} max_{p : |p_{k,j}| ≤ λ/2} Σ_k ( f_k · u_k + p_k · ∇_h u_k ),    (5)
where K_h is the discrete counterpart of (3) and |p_{k,j}| discretizes the constraint on φ from (4) for the k-th label at node j. Problems of this form can be solved with the Chambolle-Pock algorithm [6], summarized in Algorithm 1.
The required resolvent operators are given by
p^{m+1} = Π_{|p_{k,j}| ≤ λ/2} ( p^m + σ ∇_h ū^m )    (6)
u^{m+1} = Π_{K_h} ( u^m − τ (∇_hᵀ p^{m+1} + f) )    (7)
Here, σ, τ > 0 are the step sizes and Π_{K_h} denotes the orthogonal projection onto the set K_h, i.e. a pixel-wise projection onto the unit simplex. This projection can be calculated in finitely many operations using an iterative algorithm described in [24]. In this work, all numerical experiments use the same fixed step size parameters. The regularization parameter λ is chosen separately for the different experiments (Table 2; Table 1 and Figure 1; Figure 2).
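A minimal numpy sketch of such a primal-dual iteration (cf. Algorithm 1) is given below, under assumed step sizes τ, σ and with a sorting-based simplex projection used as a stand-in for the iterative algorithm from [24]; this is an illustration, not the authors' exact implementation:

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary, per label: (n, H, W) -> (n, H, W, 2).
    g = np.zeros(u.shape + (2,))
    g[:, :-1, :, 0] = u[:, 1:, :] - u[:, :-1, :]
    g[:, :, :-1, 1] = u[:, :, 1:] - u[:, :, :-1]
    return g

def div(p):
    # Negative adjoint of grad (discrete divergence).
    d = np.zeros(p.shape[:-1])
    d[:, :-1, :] += p[:, :-1, :, 0]
    d[:, 1:, :] -= p[:, :-1, :, 0]
    d[:, :, :-1] += p[:, :, :-1, 1]
    d[:, :, 1:] -= p[:, :, :-1, 1]
    return d

def project_simplex(u):
    # Euclidean projection of each pixel's label vector onto the unit simplex
    # (sorting-based variant; [24] describes a finite iterative algorithm).
    n = u.shape[0]
    v = np.sort(u, axis=0)[::-1]
    css = np.cumsum(v, axis=0) - 1.0
    idx = np.arange(1, n + 1)[:, None, None]
    rho = np.count_nonzero(v - css / idx > 0, axis=0)
    theta = np.take_along_axis(css, rho[None] - 1, axis=0) / rho
    return np.maximum(u - theta, 0.0)

def segment(f, lam=0.5, tau=0.25, sigma=0.5, iters=300):
    # Chambolle-Pock iteration for the relaxed model; step sizes are assumed,
    # not the authors' choice. f: (n, H, W) stack of discrete indicators.
    n = f.shape[0]
    u = np.full(f.shape, 1.0 / n)
    p = np.zeros(f.shape + (2,))
    ubar = u.copy()
    for _ in range(iters):
        # Dual ascent, then projection onto the ball of radius lam/2.
        p += sigma * grad(ubar)
        p /= np.maximum(1.0, np.linalg.norm(p, axis=-1, keepdims=True) / (lam / 2))
        # Primal descent, then projection onto the simplex; overrelaxation.
        u_old = u
        u = project_simplex(u + tau * (div(p) - f))
        ubar = 2 * u - u_old
    return np.argmax(u, axis=0)  # hard segmentation from the soft labeling
```

On a clean two-region indicator stack, `segment` recovers the pointwise-optimal labeling while the TV term smooths the region boundary.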
3 Local features for structure characterization
3.1 Description and relation to MumfordShah
Our aim is to provide a method to segment images into regions of different structure based on the information from local features. In the discrete setting, local features corresponding to a pixel x are encoded in the values of an input image u_0 in a window W(x) centered at x. Here, the window size ω determines the scale that is still considered to be local. From these values, features are extracted by an operator of the form T : R^{ω×ω} → R^d, that should fulfill certain properties, which we will detail later. Applying T to the matrix containing the image pixel values in the window gives the feature vector F(x) corresponding to the pixel x:
F(x) := T( u_0|_{W(x)} )    (8)
Let Ω̂_1, …, Ω̂_n denote the sought discrete regions, i.e. the true sets of pixels belonging to the different structure regions. Then, a suitable feature extractor (as defined in (8)) for discriminating regions of different structures can be characterized by the following two properties:
F(x) ≈ F(y) for all x, y ∈ Ω̂_k, k = 1, …, n,    (9)
‖F(x) − F(y)‖ is large for x ∈ Ω̂_k, y ∈ Ω̂_l, k ≠ l,    (10)
i.e. local features should vary as little as possible within each region and offer as much contrast as possible between different regions. Examples for robust feature extractors for texture and crystal segmentation will be discussed in Section 4.
Given a suitable feature extractor and the true mean features F̄_k within the different structure regions,
F̄_k = (1/|Ω̂_k|) Σ_{x ∈ Ω̂_k} F(x),    (11)
the following indicator can be used for segmentation in (5):
f_k(x) = ‖F(x) − F̄_k‖²    (12)
In practice, the mean values F̄_k are of course unknown. However, given some approximate guess F̄_k^{0} for the mean values, Algorithm 1 can be applied, resulting in a segmentation Ω_1^{0}, …, Ω_n^{0}. Then, the following update rule can be applied to refine the mean features:
F̄_k^{j+1} = (1/|Ω_k^{j}|) Σ_{x ∈ Ω_k^{j}} F(x)    (13)
This way, given some initial guess F̄_k^{0}, both the segmentation and the mean features can be refined in an alternating fashion. Note that we use curly brackets instead of round ones for the index here, to differentiate between the iterations within Algorithm 1 and these outer iterations. Unfortunately, the result of this alternating minimization strategy depends heavily on the initial guess F̄_k^{0}. In the literature, it is often suggested to approximate F̄_k via clustering, which is equivalent to minimizing (1) for λ = 0 with (12) as indicator, with respect to both the regions and the mean features. This clustering problem is NP-hard itself, but efficient iterative solvers, such as k-means, are available and have proven to work well in the case of low-dimensional indicator functions (e.g. in color segmentation) [8]. However, robust feature extractors suitable for structure discrimination tend to be high-dimensional. In this case, clustering becomes unfeasible in practice, because the available solvers are likely to get stuck in undesired local minima when applied in such high dimensions.
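The alternating refinement of segmentation and mean features can be sketched as follows; the helper `refine` is hypothetical, and for brevity the inner variational segmentation (Algorithm 1) is replaced by a pointwise argmin of the indicator (12), i.e. the case λ = 0:

```python
import numpy as np

def refine(features, means, n_outer=5):
    # features: (d, H, W) feature vectors per pixel; means: (n, d) current guesses.
    for _ in range(n_outer):
        # Indicator (12) evaluated for every region and pixel, then hard assignment.
        cost = ((features[None] - means[:, :, None, None]) ** 2).sum(axis=1)
        labels = np.argmin(cost, axis=0)
        # Update rule (13): mean feature over each current segment.
        for k in range(means.shape[0]):
            mask = labels == k
            if mask.any():
                means[k] = features[:, mask].mean(axis=1)
    return labels, means
```

Even this simplified loop recovers well-separated cluster means from a poor initial guess; the full method replaces the argmin step by the regularized segmentation.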
3.2 Dimension reduction and decorrelation
Clustering of high-dimensional data is a well studied problem in the literature [27]. It has been noted that often many of the dimensions are irrelevant for the core information expressed by a given data set and that they might mask the essential clusters due to noise. Therefore, several approaches for subspace clustering have been proposed to address this problem [20]. In our context, dimension reduction and decorrelation via principal component analysis (PCA) should work well: given a feature extractor fulfilling (9) & (10), the set of features within any single region is of low variance and, compared to this, the set of all features is of high variance.
Performing PCA on the matrix of mean-centralized features results in a lower-dimensional coefficient representation, where the projection is given by the matrix V of eigenvectors belonging to the largest eigenvalues of the feature covariance matrix. Clustering the coefficients into n clusters gives a coefficient representation of the cluster centers, which results in the initial guess F̄_k^{0}. Since n mean features have to be distinguished, a reduced dimension on the order of n is a natural choice for the dimension reduction.
In [33], it was noted that the clustering can get stuck in local minima due to effects caused by the inhomogeneity of the features across the boundary between two regions. Unlike purely pointwise indicators (ω = 1), local feature extractors cause points within about half the window size of a region boundary (in 2D space) to spread between the two mean features corresponding to the regions adjacent to the boundary (in coefficient space). In order to prevent the k-means minimizer from getting stuck in between two such clusters, Yuan et al. proposed to disregard such boundary points when clustering by thresholding an edgeness indicator, given by finite differences of the features on the scale of the window size [33]. As this approach is only based on the assumption of homogeneity of the features within each structure region, it can be used for general feature extractors and allows us to adopt this technique for the initial clustering.
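A possible realization of this initialization is sketched below. The helper `pca_init` is hypothetical: it is numpy-only, uses a toy k-means in place of a library solver, flattens the pixels to one axis so that the edgeness finite differences are taken along that axis, and assumes a quantile threshold for the boundary exclusion:

```python
import numpy as np

def pca_init(features, n_segments, q, edge_quantile=0.9, seed=0):
    # features: (d, N) column-wise feature vectors along a pixel ordering.
    # 1) PCA: project mean-centralized features onto the q leading eigenvectors.
    mean = features.mean(axis=1, keepdims=True)
    A = features - mean
    evals, evecs = np.linalg.eigh(A @ A.T / A.shape[1])  # ascending order
    V = evecs[:, ::-1][:, :q]                            # q leading eigenvectors
    coeff = V.T @ A                                      # (q, N) coefficients
    # 2) Edgeness: finite differences of the coefficients along the pixel axis;
    #    points above the quantile are excluded from clustering (cf. [33]).
    edge = np.zeros(coeff.shape[1])
    edge[1:] = np.linalg.norm(np.diff(coeff, axis=1), axis=0)
    keep = edge <= np.quantile(edge, edge_quantile)
    # 3) A tiny k-means on the kept coefficients (stand-in for a library solver).
    rng = np.random.default_rng(seed)
    pts = coeff[:, keep].T
    centers = pts[rng.choice(len(pts), n_segments, replace=False)]
    for _ in range(20):
        lab = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([pts[lab == k].mean(0) if (lab == k).any() else centers[k]
                            for k in range(n_segments)])
    # Back-project the cluster centers to initial mean features in R^d.
    return V @ centers.T + mean                          # (d, n_segments)
```

For two well-separated feature clusters, the back-projected centers land close to the true region means regardless of the random center initialization.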
While the above provides a robust method to retrieve an initial value for F̄_k in the full-dimensional feature space, the dimension reduction we now have at hand also suggests itself to reduce the noise of the high-dimensional feature vectors and to increase their inter-region contrast within the subsequent variational segmentation framework. First of all, let us point out that since the eigenvectors form an orthonormal basis of R^d, we can express the indicator (12) and thus the fidelity term in (5) in terms of the coefficients c(x) = Vᵀ F(x) and c̄_k = Vᵀ F̄_k:
f_k(x) = ‖c(x) − c̄_k‖²    (14)
Furthermore, definition (13) can be rewritten as
c̄_k^{j+1} = (1/|Ω_k^{j}|) Σ_{x ∈ Ω_k^{j}} c(x),    (15)
i.e. the mean values can be updated using the coefficients c(x) instead of the feature vectors F(x). Reducing the dimension to q < d introduces an error, which can be bounded, up to a constant depending on the data, by the eigenvalues μ_1 ≥ … ≥ μ_d of A Aᵀ, where A denotes the matrix of mean-centralized features:
| ‖c(x) − c̄_k‖² − ‖c_q(x) − c̄_{q,k}‖² | ≤ C Σ_{i=q+1}^d μ_i    (16)
This inequality can be deduced by applying the triangle inequality to the difference of the left- and right-hand side in (14), splitting the coefficient vectors into their first q and remaining d − q components, as well as representing the mean features as convex combinations of the columns of the feature matrix with nonzero coefficients only in the columns corresponding to the respective region.
Note that the error in (16) can be estimated without calculating all d eigenvalues, which may become computationally expensive when the dimension becomes large:

Σ_{i=q+1}^d μ_i = ‖A‖_F² − Σ_{i=1}^q μ_i    (17)

Here, ‖·‖_F denotes the Frobenius norm. This way, the error in the fidelity term can still be monitored when the eigenvectors are calculated iteratively, e.g. using a deflation type of strategy. We use this dimension reduction within the entire framework. Algorithm 2 summarizes the proposed method. The edgeness threshold on the finite differences before clustering in Algorithm 2 is chosen separately for the different experiments (Table 2; Table 1 and Figure 1; Figure 2).
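The identity behind this estimate is simply trace(A Aᵀ) = ‖A‖_F²: the tail sum of eigenvalues equals the squared Frobenius norm minus the leading eigenvalues. A quick numerical check, with assumed toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 500))         # d = 20 features, N = 500 pixels
A -= A.mean(axis=1, keepdims=True)         # mean-centralize

evals = np.linalg.eigvalsh(A @ A.T)[::-1]  # eigenvalues of A A^T, descending
q = 5

# Tail sum via the Frobenius norm -- no trailing eigenvalues needed:
tail_direct = evals[q:].sum()
tail_frob = np.linalg.norm(A, 'fro') ** 2 - evals[:q].sum()
print(np.isclose(tail_direct, tail_frob))  # -> True
```

This is what allows monitoring the truncation error while computing only the q leading eigenpairs.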
3.3 Properties and advantages of the method
PCA has been used, for instance, as a concept for dimension reduction of PET data and subsequent variational segmentation [26], as well as a tool for increasing the contrast in the region descriptors for natural color-texture images in a variational segmentation framework [17]. Moreover, Yuan et al. [33] utilized the related concept of singular value decomposition to compute a low-rank factorization of a local spectral histogram based feature matrix and estimate subsequent template features via clustering. We want to stress that the initialization step in the proposed method shares the idea of dimension reduction and clustering of features, albeit differing slightly in the details, and is, in this regard, similar to [33]. However, our work embeds these ideas into a variational segmentation framework, which grants the following two main advantages.
First, the proposed method can be applied to a very general class of feature extractors, since it only relies on the natural properties (9), (10). In particular, in contrast to [33], it does not rely on the assumption that the feature vectors are linear combinations of the mean features in each region (this assumption and its consequences are discussed later in this section). Among others, the generality of our framework allows the usage of globally coupling, convolution based linear transforms. Transforms of this type, such as the short-time Fourier transform [3], the Stockwell transform [11], or different types of wavelet transforms [7], have been studied for texture segmentation and have demonstrated good performance.
Second, the dimension reduction of the fidelity term helps to increase the degree to which it fulfills (9), (10). In particular, incorporating the PCA not only in the initial clustering, but also throughout the entire variational minimization, helps to suppress noise in the fidelity term. In Section 4.2, we will demonstrate how effective this strategy is in the case of extremely noisy crystal images, using the Fourier transform as the feature extractor.
Unlike [23, 33], the general applicability of the proposed framework is tied to the need for a regularization of the segment boundaries, which is covered by the Mumford-Shah model. This need arises from an unexpected behavior of the indicator functions near segment boundaries. Due to the window size, the feature extractor sees a mixture of different segments near boundaries. For general feature extractors, this means that feature vectors near boundaries are not necessarily a linear combination of the cluster centers corresponding to the adjacent segments. In case n ≥ 3, it might happen that the feature vector at a boundary between two regions is nearer to the mean feature vector of a third region than to either of the two adjacent mean features. This means that the indicator cannot necessarily identify the correct segment within a distance of about half the window size to the sought segment boundaries. Note that this effect does not arise for n = 2. As mentioned above, the perimeter regularization within the Mumford-Shah model addresses this problem for practical purposes. For input data where the regularization alone is not sufficient, we suggest to combine feature extractors of different window sizes.
Beyond this, the proposed method is an extension of [23, 33] in the sense that the decoupling of the coefficient representation from the segmentation allows for an outer iteration to refine the mean features, whereas in [33] the clusters are solely computed from the feature matrix.
Let us point out that, since the method is based on local windows, it has the common limitations inherent to such methods. The feature scale is tied to the window size ω, so the method can only reliably detect regions that are at least somewhat larger than the window. Furthermore, special care has to be taken close to the image boundary, where the window leaves the support of the image. Please note that the proposed method enforces the region boundaries to approach the image boundary orthogonally, which is due to the natural boundary conditions in the Euler-Lagrange equation of (2). This effect can be reduced by introducing ghost cells at the image boundary with a zero extension of all indicators, but it is still noticeable (cf. Figure 1).
4 Applications and numerical results
4.1 Texture segmentation
Apart from plain gray value or color intensities, among the most thoroughly studied types of structures in image segmentation are textures [18]. In the image processing sense, a texture essentially consists of a more or less strictly repetitive pattern of the spatial arrangement of the gray or color values in an image. Thus, indicators for texture segmentation need to take into account image information from a whole neighborhood, at least on the scale of the spatial distance between repetitions. There are two main classes of operators that have been proposed in the literature, namely 1) local spectral histograms combined with a suitable bank of filters and 2) localized linear transforms, both of which fall into the class of feature extractors described earlier. In the context of texture segmentation, we limit our analysis to the first class, while the second class will be utilized for crystal segmentation in the next section.
Local spectral histograms are defined as follows: first, a bank of filters is selected and applied to the image, resulting in a sequence of filtered images I_1, …, I_L. Then, the feature extractor is defined by

F(x) = ( H_1(x), …, H_L(x) ),    (18)

where H_i(x) is the b-bin histogram of I_i over the window W(x). Here, the bin boundaries t_0 < t_1 < … < t_b define the binning of the histograms and are often chosen such that t_0 = min I_i, t_b = max I_i and equidistant in between. Thus, the dimension of the extracted feature at every pixel is d = L · b. The most popular filter used in this context is arguably the Gabor filter. Other commonly used filters are Gaussian filters, Laplacian of Gaussian filters, or just the intensity filter (i.e. the identity). For a thorough description of spectral histograms of filtered images and their application to texture segmentation, we refer to [23].
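A minimal numpy-only sketch of such a local spectral histogram, with a hypothetical two-filter bank (intensity and a 3x3 Laplacian) and per-window normalized histograms:

```python
import numpy as np

def filter_bank(img):
    # Minimal bank: identity (intensity) and a 3x3 Laplacian, both numpy-only.
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
                       + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return [img, lap]

def spectral_histogram(img, x, y, w=8, bins=8):
    # Local spectral histogram: histogram each filter response over the
    # w x w window centered at (x, y); feature dimension = len(bank) * bins.
    feats = []
    for r in filter_bank(img):
        win = r[x - w // 2: x + w // 2, y - w // 2: y + w // 2]
        h, _ = np.histogram(win, bins=bins, range=(float(r.min()), float(r.max())))
        feats.append(h / win.size)  # normalized histogram
    return np.concatenate(feats)
```

By construction, two windows inside the same texture yield nearly identical features, while windows in differently textured regions differ, matching properties (9) and (10).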
In the following, we compare our approach to competing methods on the Outex_US_00000 test suite of the Outex texture database (http://www.outex.oulu.fi) and the Prague ICPR2014 contest [15] (http://mosaic.utia.cas.cz/icpr2014/).
Method        CS             O              C              CA             CO             CC
FSEG          85.20 ± 22.7   *4.76 ± 3.9    6.05 ± 5.5     *82.49 ± 13.4  *89.23 ± 9.6   *88.20 ± 11.8
Algorithm 2   *80.60 ± 28.3  5.82 ± 8.3     5.62 ± 6.8     82.73 ± 15.5   89.24 ± 11.3   88.84 ± 13.3
FSEG          68.60 ± 24.3   3.61 ± 3.2     *6.04 ± 8.8    70.67 ± 17.8   80.40 ± 13.1   73.51 ± 18.6
Clustering    60.00 ± 26.8   13.36 ± 8.1    15.52 ± 12.0   66.41 ± 12.5   77.54 ± 10.0   77.51 ± 11.2
FSEG [33]     45.80 ± 26.0   19.29 ± 34.2   17.55 ± 19.1   50.65 ± 20.6   64.95 ± 16.3   52.88 ± 21.9
On the Outex database, we provide a thorough comparison between the proposed method and Factorization-Based Texture Segmentation (FSEG) [33]. We chose to focus on FSEG here, because it 1) ranks best in the ICPR2014 contest among methods with available code and 2) is similar to our framework. Table 1 quantifies the mean segmentation performance and its standard deviation over all 100 texture mosaics from the Outex_US_00000 test suite. Three different versions of FSEG, described in the caption of Table 1, are used for this comparison. Note that running FSEG without its post-processing (first row) makes sense in this case, since the number of segments is known. However, this also disables filling holes in the segments, as seen in the sixth column of Figure 1. While FSEG performs best in correct segmentation (CS), and FSEG achieves the smallest omission error (O), our method ranks highest in all remaining measures. Note that in Table 1 we also compare our method to plain clustering in order to evaluate the benefit of 1) the improved initialization strategy via PCA and 2) the subsequent variational optimization including region boundary smoothing. Indeed, the proposed method performs significantly better than plain clustering. Finally, a visual inspection of Figure 1 indicates that the proposed method provides a good compromise between fidelity of region boundaries and reduction of artifacts (holes, missing regions).

Method        CS     OS     US     ME     NE     O      C      CA     CO     CC

VRAPMCFA  75.32  *11.95  *9.65  4.57  *4.63  4.51  8.87  83.50  88.16  90.73 
Algorithm 2  *72.27  18.33  9.41  4.19  3.92  *7.25  6.44  *81.13  *85.96  91.24 
FSEG [33]  69.18  14.69  13.64  5.13  *4.63  9.25  12.55  78.22  84.44  87.38 
SegTexCol  61.19  1.92  27.02  9.33  9.05  15.17  12.12  71.69  81.16  76.34 
MW3AR8 [16]  53.66  51.40  14.21  *5.54  6.33  19.86  84.27  70.15  75.41  89.36 
RS  46.02  13.96  30.01  12.01  11.77  35.11  29.91  58.75  68.89  69.30 
Deep Brain [13]  36.20  41.87  53.87  7.38  9.06  47.53  99.56  49.97  62.62  70.08 
Next, we compare the proposed method to results from the Prague ICPR2014 contest [15]. Here, we use the same feature extractors as above (on the lightness channel), except that the largest kernel size is omitted. Therefore, we add an intensity filter on all three channels (in L*a*b color space) to each spectral histogram. The number of segments is estimated automatically from the data. Since this estimate is not precise (even for an optimal choice of the involved parameter), we additionally employ FSEG's post-processing. Table 2 quantifies the mean segmentation quality over all 80 colored texture mosaics from the Prague ICPR2014 contest dataset. While our method produces a larger over-segmentation (OS) than the other best-ranked methods, indicating a stronger overestimation of the number of segments, it performs best for under-segmentation (US), indicating a good coverage of all ground truth segments, which reflects the good initialization strategy. Moreover, our method performs second-best for correct segmentation (CS), omission error (O), class accuracy (CA) and correct assignment (CO). Most notably, our method performs best among all competitors according to all remaining presented measures, i.e. half of them in total.
Note that VRAPMCFA resolves fine boundary features but produces labeling noise, whereas our method smoothes region boundaries in favor of suppressing labeling noise. Thus, it depends on the application which of the two methods is likely to be more suitable.
4.2 Unsupervised crystal segmentation
A fundamental research topic in materials science is the analysis and modeling of crystals. Modern transmission electron microscopes (TEM) allow for imaging at atomic scale, which makes the crystal grid visible. In a perfect setting, the crystal is given by a Bravais lattice

L = { m_1 b_1 + m_2 b_2 : m_1, m_2 ∈ Z },    (19)

where b_1, b_2 ∈ R² are the lattice vectors defining the orientation and spacing of the crystal. However, crystals of interest usually exhibit a more complicated behavior, like discontinuous orientation changes along so-called grain boundaries. The fully automatic analysis of grain geometries in TEM images is subject of ongoing research [12].
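For illustration, the lattice (19) can be generated directly from two lattice vectors; the hexagonal values below are hypothetical example parameters:

```python
import numpy as np

# Hypothetical lattice vectors for a hexagonal crystal (unit spacing).
b1 = np.array([1.0, 0.0])
b2 = np.array([0.5, np.sqrt(3) / 2])

def lattice_points(b1, b2, extent=3):
    # All integer combinations m1*b1 + m2*b2 with |m1|, |m2| <= extent.
    ij = np.mgrid[-extent:extent + 1, -extent:extent + 1].reshape(2, -1).T
    return ij @ np.stack([b1, b2])

pts = lattice_points(b1, b2)
```

For these vectors, the nearest-neighbor distance in the generated point set equals the lattice spacing 1.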
Available variational approaches for grain segmentation [4, 5, 12] are built on the assumption that all grains can be characterized through transformations of a local stencil S, corresponding to a reference crystal, given by all linear combinations of the lattice vectors with coefficients in a small index range. Then, the Mumford-Shah model with an indicator function of the following type can be used [4, 5]:

f_k(x) = Σ_{s ∈ S} d( u_0(x), u_0(x + R_{α_k} s) )    (20)

Here, d denotes a suitable intensity distance function and R_{α_k} is an orthogonal matrix, rotating the stencil by the angle α_k of the k-th grain relative to the reference.
The need for a priori knowledge of the reference crystal structure inherent to indicators like (20) is a severe limitation of available methods. As we will show, this limitation can be overcome by using our proposed framework with the modulus of the 2D FFT as the local feature extractor:
F(x) = | FFT( u_0|_{W(x)} ) |    (21)
Let us assume that the window is large enough to cover at least one period of the crystal in either direction at any location. Then, the modulus of the Fourier transform automatically encodes the local stencil within the positions of Bragg reflections.
Assuming that the window size matches the period of the crystal and the unit cell is a square, i.e. the discrete signals are exactly periodic, a translation of the window across the image causes only a phase shift in the frequency domain. This phase shift is canceled by taking the modulus, making the feature extractor translation invariant inside crystal regions with fixed lattice parameters. Though in practice this assumption is not met exactly, the resulting artifacts in the frequency domain, caused by window boundary effects, are easily handled by the perimeter regularization, as long as the window size is chosen reasonably large. Furthermore, they are also reduced through the proposed dimension reduction of the fidelity term (14). Note that crystal images are usually far from periodic at the image boundary, so that the features cannot be reasonably defined there. Here, we simply extend the segmentation constantly to cover the boundary region.
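The translation invariance of (21) for exactly periodic signals can be verified directly. In the synthetic example below, the window size 16 is a multiple of the assumed period 8, so a shifted window is a circular shift of the original and the FFT modulus is unchanged:

```python
import numpy as np

def fft_modulus_feature(window):
    # Feature extractor (21): modulus of the 2D FFT of the local window.
    return np.abs(np.fft.fft2(window)).ravel()

# Synthetic, exactly periodic "crystal" with period 8 in both directions.
x, y = np.meshgrid(np.arange(32), np.arange(32), indexing='ij')
crystal = np.cos(2 * np.pi * x / 8) * np.cos(2 * np.pi * y / 8)

w0 = crystal[0:16, 0:16]   # reference window (two periods per direction)
w1 = crystal[3:19, 5:21]   # window shifted by (3, 5) pixels

# Shifting only permutes the samples cyclically, so the moduli agree.
print(np.allclose(fft_modulus_feature(w0), fft_modulus_feature(w1)))  # -> True
```

In real images, windowing and noise break the exact periodicity, which is where the perimeter regularization and the PCA denoising step come in.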
Figure 2 shows segmentation results obtained by the proposed method with a 2D FFT modulus based feature extractor, using one window size for the first three rows and a different one for the last row. The crystals in the first three rows consist of regions differing only in crystal orientation. From visual inspection, the results for the noise-free images are exact up to the interatomic distance. In the first row, despite the noisy grain (third column) being hardly recognizable due to the high noise power (Gaussian noise with a standard deviation of 100% of the maximum noise-free image intensity), the segmentation deviates little from that of the noise-free grain. Similar results are observed for the three-phase scenario (second row). A lower noise power (66%) was chosen here, because otherwise the small bottom grain could not be detected, likely due to its small size compared to the window size. The multiphase segmentation also works very well for five regions (third row) under the presence of very strong noise (100%). Furthermore, as seen in the bottom row of Figure 2, the proposed Fourier-based segmentation is feasible and robust to large amounts of noise (100%), even if the individual grains have entirely different crystal lattices. This is a type of material of practical relevance to material scientists that cannot be handled by the stencil based methods [4, 5, 12].
5 Conclusions
We have discussed a variational framework for multiphase image segmentation based on structural information from high-dimensional local features. The framework imposes no special constraints on the used indicator functions, except that they should be suitable for structure discrimination in the sense that they are roughly homogeneous inside the structures of interest and provide some contrast across the different regions of interest. A robust initialization strategy for the segmentation algorithm was presented in this context, based on dimension reduction and decorrelation via PCA, as well as edgeness detection and clustering. Numerical results for two applications were presented. For texture segmentation, the proposed framework provides very competitive results, including state-of-the-art performance on the Prague benchmark. Using the 2D FFT as feature extractor, robust and unsupervised crystal segmentation can be achieved, including segmentation of crystals with entirely different structure from extremely noisy data and without a priori information about the crystals. We would like to point out that the proposed method can also be applied directly to high-dimensional data, for instance in spectroscopy.
The source code of the proposed method, including executables reproducing all presented results, is available at http://nmevenkamp.github.io/pcams.
Acknowledgments
The authors thank Paul Voyles for providing simulated STEM images of a twophase crystal.
References
 [1] L. Ambrosio, N. Fusco, and D. Pallara. Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs. Oxford University Press, New York, 2000.
 [2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. PAMI, 33(5):898–916, 2011.
 [3] J. Barba and J. Gil. An iterative algorithm for cell segmentation using short-time Fourier transform. J. Microsc., 184(2):127–132, 1996.
 [4] B. Berkels, A. Rätz, M. Rumpf, and A. Voigt. Extracting grain boundaries and macroscopic deformations from images on atomic scale. J. Sci. Comput., 35(1):1–23, 2008.
 [5] M. Boerdgen, B. Berkels, M. Rumpf, and D. Cremers. Convex relaxation for grain segmentation at atomic scale. In VMV, 2010.
 [6] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision, 40(1):120–145, 2011.
 [7] D. Charalampidis and T. Kasparis. Wavelet-based rotational invariant roughness features for texture classification and segmentation. TIP, 11(8):825–837, 2002.
 [8] H.D. Cheng, X. Jiang, Y. Sun, and J. Wang. Color image segmentation: advances and prospects. Pattern recognition, 34(12):2259–2281, 2001.
 [9] M. D. Collins, J. Xu, L. Grady, and V. Singh. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions. In CVPR, 2012.
 [10] C. Ding and X. He. K-means clustering via principal component analysis. In ICML, 2004.
 [11] S. Drabycz, R. G. Stockwell, and J. R. Mitchell. Image texture characterization using the discrete orthonormal Stransform. J. Digit. Imaging, 22(6):696–708, 2009.
 [12] M. Elsey and B. Wirth. Fast automated detection of crystal distortion and crystal defects in polycrystal images. Multiscale Modeling & Simulation, 12(1):1–24, 2014.
 [13] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. IJCV, 59(2):167–181, 2004.
 [14] M. M. Fraz, P. Remagnino, A. Hoppe, B. Uyyanonvara, A. R. Rudnicka, C. G. Owen, and S. A. Barman. Blood vessel segmentation methodologies in retinal images–a survey. Comput. Methods Programs Biomed., 108(1):407–433, 2012.
 [15] M. Haindl and S. Mikeš. Unsupervised image segmentation contest. In ICPR, 2014.
 [16] M. Haindl, S. Mikeš, and P. Pudil. Unsupervised hierarchical weighted multi-segmenter. In J. A. Benediktsson, J. Kittler, and F. Roli, editors, Multiple Classifier Systems, volume 5519 of Lecture Notes in Computer Science, pages 272–282. Springer Berlin Heidelberg, 2009.
 [17] Y. Han, X.C. Feng, and G. Baciu. Variational and PCA based natural image segmentation. Pattern Recognition, 46(7):1971–1984, 2013.
 [18] D. E. Ilea and P. F. Whelan. Image segmentation based on the integration of colour–texture descriptors—a review. Pattern Recognition, 44(10):2479–2501, 2011.
 [19] N. Kambhatla and T. K. Leen. Dimension reduction by local principal component analysis. Neural Comput., 9(7):1493–1516, 1997.
 [20] H.-P. Kriegel, P. Kröger, and A. Zimek. Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering. ACM Trans. Knowl. Discov. Data, 3(1):1:1–1:58, Mar. 2009.
 [21] F. Li, M. K. Ng, T. Y. Zeng, and C. Shen. A multiphase image segmentation method based on fuzzy region competition. SIAM J. Imaging Sci., 3(3):277–299, 2010.
 [22] J. Li, J. M. Bioucas-Dias, and A. Plaza. Spectral-spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields. TGRS, 50(3):809–823, 2012.
 [23] X. Liu and D. Wang. Image and texture segmentation using local spectral histograms. TIP, 15(10):3066–3077, Oct 2006.
 [24] C. Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of R^n. J. Optim. Theory Appl., 50(1):195–200, 1986.
 [25] D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math., 42(5):577–685, 1989.
 [26] B. Parker and D. D. Feng. Variational segmentation and PCA applied to dynamic PET analysis. In Pan-Sydney Area Workshop on Visual Information Processing, 2003.
 [27] L. Parsons, E. Haque, and H. Liu. Subspace clustering for high dimensional data: A review. SIGKDD Explor. Newsl., 6(1):90–105, 2004.
 [28] T. R. Reed and J. H. DuBuf. A review of recent texture segmentation and feature extraction techniques. CVGIP: Image Understanding, 57(3):359–372, 1993.
 [29] C. Tai, X. Zhang, and Z. Shen. Wavelet frame based multiphase image segmentation. SIAM J. Imaging Sci., 6(4):2521–2546, 2013.
 [30] L. J. van der Maaten, E. O. Postma, and H. J. van den Herik. Dimensionality reduction: A comparative review. JMLR, 10(141):66–71, 2009.
 [31] S. R. Vantaram and E. Saber. Survey of contemporary trends in color image segmentation. J. Electron. Imaging, 21(4):040901–1–040901–28, 2012.
 [32] T. P. Weldon, W. E. Higgins, and D. F. Dunn. Efficient Gabor filter design for texture segmentation. Pattern Recognition, 29(12):2005–2015, 1996.
 [33] J. Yuan, D. Wang, and A. Cheriyadat. Factorization-based texture segmentation. TIP, 24(11):3488–3497, November 2015.
 [34] C. Zach, D. Gallup, J.M. Frahm, and M. Niethammer. Fast global labeling for realtime stereo using multiple plane sweeps. In VMV, 2008.