
Superpixel Segmentation Using Gaussian Mixture Model

Superpixel segmentation algorithms partition an image into perceptually coherent atomic regions by assigning every pixel a superpixel label. These algorithms have been widely used as a preprocessing step in computer vision tasks, as they can enormously reduce the number of entries fed to subsequent algorithms. In this work, we propose an alternative superpixel segmentation method based on the Gaussian mixture model (GMM). We assume that each superpixel corresponds to a Gaussian distribution, and that each pixel is generated by first randomly choosing one distribution from several Gaussian distributions related to that pixel and then drawing the pixel from the selected distribution. Under this assumption, each pixel is drawn from a mixture of Gaussian distributions with unknown parameters (a GMM). An algorithm based on the expectation-maximization method is applied to estimate the unknown parameters; once they are obtained, the superpixel label of a pixel is determined by a posterior probability. The success of applying GMM to superpixel segmentation rests on two major differences between traditional GMM-based clustering and the proposed method: data points in our model may be non-identically distributed, and we present an approach to control the shape of the estimated Gaussian functions by adjusting their covariance matrices. Our method has linear complexity with respect to the number of pixels. The proposed algorithm is inherently parallel and achieves further speedups when simple OpenMP directives are added to our implementation. According to our experiments, our algorithm outperforms state-of-the-art superpixel algorithms in accuracy and presents competitive computational efficiency.


Code Repositories

GMMSP: Superpixel Segmentation Using Gaussian Mixture Model
I Introduction

Partitioning an image into superpixels can be used as a preprocessing step for complex computer vision tasks, such as segmentation [1, 2, 3], visual tracking [4], image matching [5, 6], etc. Sophisticated algorithms benefit from working with superpixels instead of raw pixels, because superpixels reduce the number of input entries and enable feature computation on more meaningful regions.

Like many terminologies in computer vision, there is no rigorous mathematical definition for superpixel. The commonly accepted description of a superpixel is “a group of connected, perceptually homogeneous pixels which does not overlap any other superpixel.” For superpixel segmentation, the following properties are generally desirable.

Prop. 1. Accuracy. Superpixels should adhere well to object boundaries. Superpixels that cross object boundaries arbitrarily may lead to bad or even catastrophic results for subsequent algorithms. [7, 8, 9, 10]

Prop. 2. Regularity. The shape of superpixels should be regular. Superpixels with regular shapes make it easier to construct a graph for subsequent algorithms. Moreover, such superpixels are visually pleasant, which helps algorithm designers analyze their results. [11, 12, 13]

Prop. 3. Similar size. Superpixels should have similar sizes. This property enables subsequent algorithms to deal with each superpixel without bias [14, 15, 16]. As pixels have the same “size” and the term “superpixel” originates from “pixel”, this property is also intuitively reasonable. It is a key property distinguishing superpixels from other over-segmented regions.

Prop. 4. Efficiency. A superpixel algorithm should have low complexity. Extracting superpixels efficiently is critical for real-time applications. [14, 8]

Under the constraint of Prop. 3, the requirements on accuracy and regularity are to a certain extent opposed. Intuitively, if a superpixel with a limited size needs to adhere well to object boundaries, it has to adjust its shape to that object, which may be irregular. A satisfactory compromise between regularity and accuracy has not yet been found by existing superpixel algorithms. As shown for four typical algorithms in Fig. 6(b)-(e), the shape of superpixels generated by NC [17, 18] (Fig. 6(b)) and LRW [12] (Fig. 6(c)) is more regular than that of superpixels extracted by SEEDS [8] (Fig. 6(d)) and ERS [9] (Fig. 6(e)). Nonetheless, the superpixels generated by SEEDS [8] and ERS [9] adhere to object boundaries better than those of NC [17] and LRW [12]. In this work, a Gaussian mixture model (GMM) and an algorithm derived from the expectation-maximization algorithm [19] are built. It turns out that the proposed method can strike a balance between regularity and accuracy. An example is displayed in Fig. 6(a): the compromise is that superpixels in regions with complex textures take irregular shapes to adhere to object boundaries, while in homogeneous regions the superpixels remain regular.

Fig. 6: Superpixel segmentations by five algorithms: (a) our method, (b) NC [17], (c) LRW [12], (d) SEEDS [8], and (e) ERS [9]. Each segmentation has approximately 200 superpixels. The second row zooms in on the regions of interest defined by the white boxes in the first row. In the third row, superpixel boundaries are drawn on pure black images to highlight the shapes of the superpixels.

Computational efficiency is a matter of both algorithmic complexity and implementation. Our algorithm has linear complexity with respect to the number of pixels. As an algorithm has to read all pixels, linear time is theoretically the best complexity for the superpixel problem. Generally, algorithms can be categorized into two major groups: parallel algorithms, which can be implemented with parallel techniques and whose performance scales with the number of parallel processing units, and serial algorithms, whose implementations are usually executed sequentially and can use only part of the resources of a parallel computer. Modern computer architectures are parallel, and applications benefit from parallel algorithms because parallel implementations generally run faster than serial implementations of the same algorithm. The proposed algorithm is inherently parallel, and our serial implementation can easily achieve speedups by adding a few simple OpenMP directives.

The proposed method is constructed by associating each superpixel with one Gaussian distribution; modeling each pixel with a mixture of the Gaussian distributions related to that pixel; estimating the unknown parameters in the proposed mixtures via an approach modified from the expectation-maximization algorithm; and determining the superpixel of each pixel by a posterior probability. The proposed approach was tested on the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500) [20]. It is shown that the proposed method outperforms state-of-the-art methods in accuracy and presents a competitive performance in computational efficiency. Our main contributions are summarized as follows:

  1. Our model is novel for superpixel segmentation, as GMM has not yet been well explored for the superpixel problem.

  2. We present a pixel-related GMM for each individual pixel, in which case pixels may be non-identically distributed, meaning that two pixels may have different GMMs.

  3. The proposed algorithm offers an option for controlling the regularity of superpixel shapes.

  4. Our algorithm is a parallel algorithm.

  5. The proposed approach gives better accuracy than state-of-the-art algorithms.

  6. Our method strikes a balance between superpixel regularity and accuracy (see Fig. 6(a)).

The rest of this paper is organized as follows. Section II presents an overview of related works on superpixel segmentation. Section III introduces the proposed method. Experiments are discussed in section IV. Finally, the paper is concluded in section V.

II Related works

The concept of the superpixel was first introduced by Xiaofeng Ren and Jitendra Malik in 2003 [21]. Since then, the superpixel problem has been well studied [22, 23]. Existing superpixel algorithms extract superpixels either by optimizing superpixel boundaries, such as finding paths and evolving curves, or by grouping pixels, e.g., the well-known SLIC [14]. In this section, we give a brief review of how existing algorithms solve the superpixel problem from these two perspectives.

Optimizing boundaries. Algorithms in this category extract superpixels not by labeling pixels directly but by marking superpixel boundaries, or by updating only the labels of pixels on superpixel boundaries. Rohkohl et al. present a superpixel method that iteratively assigns superpixel boundaries to their most similar neighboring superpixel [24]. A superpixel is represented by a group of pixels randomly selected from that superpixel, and the similarity between a pixel and a superpixel is defined as the average of the similarities from the pixel to all the selected representatives. Aiming to extract lattice-like superpixels, or “superpixel lattices”, [13] partitions an image into superpixels by gradually adding horizontal and vertical paths in strips of a pre-computed boundary map. The paths are formed by two different methods: s-t min-cut and dynamic programming. The former finds paths by graph cuts and the latter constructs paths directly. The paths are designed so that parallel paths never cross and perpendicular paths cross only once. The idea of modeling superpixel boundaries as paths (or seam carving [25]) and the use of dynamic programming were borrowed by later variations and improvements [26, 27, 28, 29, 30, 31]. In TurboPixels [16], Levinshtein et al. model the boundary of each superpixel as a closed curve, so connectivity is naturally guaranteed. Based on level-set evolution, the curves gradually sweep over the unlabeled pixels to form superpixels under the constraints of two velocities. In VCells [7], a superpixel is represented by the mean color vector of the pixels in that superpixel. With a specially designed distance [7], VCells iteratively updates superpixel boundaries to their nearest neighboring superpixel; the iteration stops when no more pixels need to be updated. SEEDS [32, 8] exchanges superpixel boundaries using a hierarchical structure. In the first iteration, the biggest pixel blocks on superpixel boundaries are updated for a better energy. The blocks become smaller and smaller as the number of iterations increases, and the iteration stops after boundary exchanges at the pixel level. Improving on SLIC [14], [33] and [34] present more complex energy functions. To minimize their corresponding energies, [33] and [34] update boundary pixels instead of assigning a label to every pixel in each iteration. Based on [33], [34] adds connectivity and superpixel size to its energy. For the pixel updating, one of them uses a hierarchical structure like SEEDS [32], while the other exchanges labels only at the pixel level. Zhu et al. propose a speedup of SLIC [14] that only moves unstable boundary pixels, i.e., those whose labels changed in the previous iteration [26]. Besides, based on pre-computed line segments or edge maps of the input image, [35] and [11] extract superpixels by aligning superpixel boundaries to the lines or edges.

Grouping pixels. Superpixel algorithms that assign labels to all pixels in each iteration are in this category. With an affinity matrix constructed from a boundary cue [36], the algorithm developed in [18][21], usually abbreviated as NC, uses normalized cuts [17] to extract superpixels. In Quick shift (QS) [37], the pixel density is estimated with a Parzen window using a Gaussian kernel. A pixel is assigned to the same group as its parent, which is the nearest pixel with a greater density within a specified distance. QS does not guarantee connectivity; in other words, pixels with the same label may not be connected. Veksler et al. propose an approach that distributes a number of overlapping square patches over the input image and extracts superpixels by choosing, for each pixel, a label from the patches that cover it [38]. The expansion algorithm in [39] is gradually adapted to modify pixel labels within local regions of fixed size in each iteration. A similar solution in [40] formulates the superpixel problem as a two-label problem and builds an algorithm that groups pixels into vertical and horizontal bands; pixels in the same vertical and horizontal group form a superpixel. Starting from an empty graph edge set, ERS [9] sequentially adds edges to the set until the desired number of superpixels is reached. At each addition, ERS [9] takes the edge that results in the greatest increase of an objective function, and the number of generated superpixels is exactly equal to the desired number. SLIC [14] is the most well-known superpixel algorithm due to its efficiency and simplicity. In SLIC [14], each pixel corresponds to a five-dimensional vector including color and spatial location, and k-means is employed to cluster those vectors locally, i.e., each pixel is compared only with superpixels that fall within a specified spatial distance and is assigned to the nearest superpixel. Many variations follow the idea of SLIC in order to either decrease its run-time [41, 42, 43] or improve its accuracy [44, 33]. LSC [10] also uses a k-means method to refine superpixels. Instead of directly using the 5D vector of SLIC [14], LSC [10, 45] maps it to a feature space, and a weighted k-means is adopted to extract superpixels. Based on the marker-based watershed transform, [15] and [41] incorporate spatial constraints into an image gradient in order to produce superpixels with regular shapes and similar sizes. LRW [12] groups pixels using an improved random walk algorithm. By using texture features to optimize an initial superpixel map, this method can produce regular superpixels in regions with complex texture; however, it suffers from a very slow speed.

Although FH [46], mean shift [47], and watersheds [48] have been referred to as “superpixel” algorithms in the literature, they are not covered in this paper, as the sizes of the regions they produce vary enormously. This is mainly because these algorithms do not offer direct control over the size of the segmented regions. Structure-sensitive or content-sensitive superpixels [49, 50] are also not considered here, as they do not aim to extract regions of similar size (see Prop. 3 in section I).

A large number of superpixel algorithms have been proposed; however, few models have been presented, and most of the existing energy functions are variations of the k-means objective function. In our work, we propose an alternative model to tackle the superpixel problem. With a carefully designed algorithm, the segmentation underlying the model is well revealed.

III The method

III-A Model

Let $i \in \{1, 2, \dots, N\}$ stand for the pixel index of an input image $I$ with width $W$ and height $H$ in pixels. Hence, the total number of pixels of image $I$ is $N = W \times H$. Let $\mathbf{p}_i = (p_{i,1}, p_{i,2})^T$ denote pixel $i$'s position on the image plane, where $1 \le p_{i,1} \le W$ and $1 \le p_{i,2} \le H$, and let $\mathbf{c}_i$ denote pixel $i$'s intensity or color. If a color image is used, $\mathbf{c}_i$ is a vector; otherwise, $\mathbf{c}_i$ is a scalar. The number of elements in $\mathbf{c}_i$ is ignored for now and will be discussed later. We use the stacked vector $\mathbf{x}_i = (\mathbf{p}_i^T, \mathbf{c}_i^T)^T$ to represent pixel $i$.

Most existing superpixel algorithms require the desired number of superpixels $K$ as an input. However, instead of using $K$ directly, we use the horizontal and vertical seed intervals $v_x$ and $v_y$ as essential inputs. If $K$ is specified, $v_x$ and $v_y$ are obtained by the following equation.

$$v_x = v_y = \left\lceil \sqrt{WH/K} \right\rceil \qquad (1)$$

If $v_x$ and $v_y$ are preferred, it is encouraged to assign the same value to the two variables. Using equation (2), the desired number of superpixels is computed when $v_x$ and $v_y$ are directly specified, or re-computed in the case when $v_x$ and $v_y$ are obtained by equation (1).

$$K = \left\lceil W / v_x \right\rceil \times \left\lceil H / v_y \right\rceil \qquad (2)$$

For simplicity of discussion, we assume that $W/v_x$ and $H/v_y$ are integers. We define the superpixel set as $\mathcal{S} = \{1, 2, \dots, K\}$.
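As a concrete illustration of the grid geometry above, the following sketch derives the seed intervals from a desired superpixel count and recomputes the number of superpixels the grid actually realizes. The function name is ours, and the exact rounding used in equations (1) and (2) is an assumption:

```python
import math

def grid_intervals(width, height, n_superpixels):
    """Derive seed intervals v_x, v_y from a desired superpixel count,
    assuming square cells of equal expected area, then recompute the
    number of superpixels the seed grid actually realizes."""
    v = math.ceil(math.sqrt(width * height / n_superpixels))
    vx = vy = max(1, v)  # equal intervals, as the text encourages
    k = math.ceil(width / vx) * math.ceil(height / vy)
    return vx, vy, k
```

For a 480×320 image and 200 desired superpixels, this yields intervals of 28 pixels and a realized count of 216, slightly above the request, as the text notes the count is re-computed from the grid.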

Each superpixel $k \in \mathcal{S}$ corresponds to a Gaussian distribution with p.d.f. $\mathcal{N}(\mathbf{x}; \boldsymbol{\theta}_k)$, where $\boldsymbol{\theta}_k = (\boldsymbol{\mu}_k, \Sigma_k)$ and

$$\mathcal{N}(\mathbf{x}; \boldsymbol{\theta}_k) = \frac{1}{(2\pi)^{D/2} |\Sigma_k|^{1/2}} \exp\left(-\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_k)^T \Sigma_k^{-1} (\mathbf{x} - \boldsymbol{\mu}_k)\right) \qquad (3)$$

in which $D$ is the number of components in $\mathbf{x}$.

Fig. 7: Illustration of the pixel sets $P_k$. The pixel sets of three neighboring superpixels are correspondingly surrounded with blue, red, and green rectangles in this figure.

If pixel $i$ is drawn from superpixel $k$, we assume that pixel $i$ can only lie in the pixel set $P_k$ defined in equation (III-A). Fig. 7 gives a visual illustration of $P_k$. The definition of $P_k$ is one of the key points in our method.

$$P_k = \{\, i : |p_{i,1} - \mu_{k,1}| \le v_x \ \text{and}\ |p_{i,2} - \mu_{k,2}| \le v_y \,\} \qquad \text{(III-A)}$$

where $(\mu_{k,1}, \mu_{k,2})$ is the spatial center of superpixel $k$, so that for any given superpixel $k$, we have $|P_k| \le 4 v_x v_y$.

For each pixel $i$, the possible superpixels from which pixel $i$ may be generated form a superpixel set $\mathcal{K}_i = \{\, k : i \in P_k \,\}$. Let $z_i$ stand for the unknown superpixel label of pixel $i$; the $z_i$ are treated as random variables whose possible values lie in $\mathcal{K}_i$. We now treat $\mathbf{x}_1, \dots, \mathbf{x}_N$ as observations of random variables $X_1, \dots, X_N$. The probability density function $p(\mathbf{x}_i)$ of each random variable $X_i$ is defined as a mixture of Gaussian functions, known as a Gaussian mixture model (GMM).

$$p(\mathbf{x}_i) = \sum_{k \in \mathcal{K}_i} P(z_i = k)\, \mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k) \qquad (4)$$

in which the mixing weights $P(z_i = k)$, the probability that $z_i$ takes value $k$, are defined to be $1/|\mathcal{K}_i|$ for $k \in \mathcal{K}_i$, where $|\cdot|$ is the number of elements in a given set. Therefore, $p(\mathbf{x}_i)$ becomes

$$p(\mathbf{x}_i) = \frac{1}{|\mathcal{K}_i|} \sum_{k \in \mathcal{K}_i} \mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k) \qquad (5)$$

Note that pixels $i$ and $j$ may have different distributions when $\mathcal{K}_i \neq \mathcal{K}_j$, which is the most common case. This is the main difference between our GMM and the traditional GMM. The usage of $P_k$ results in superpixels with similar size.

Once an estimator $\hat{\boldsymbol{\theta}}$ of $\boldsymbol{\theta} = \{\boldsymbol{\theta}_k\}_{k \in \mathcal{S}}$ is found, the superpixel label $l_i$ of pixel $i$ can be obtained by

$$l_i = \arg\max_{k \in \mathcal{K}_i} P(z_i = k \mid \mathbf{x}_i) \qquad (6)$$

By Bayes' theorem, we have the posterior probability of each $k \in \mathcal{K}_i$,

$$P(z_i = k \mid \mathbf{x}_i) = \frac{P(z_i = k)\, \mathcal{N}(\mathbf{x}_i; \hat{\boldsymbol{\theta}}_k)}{\sum_{k' \in \mathcal{K}_i} P(z_i = k')\, \mathcal{N}(\mathbf{x}_i; \hat{\boldsymbol{\theta}}_{k'})} \qquad (7)$$

Since the mixing weights are uniform over $\mathcal{K}_i$, superpixel labels can be obtained by

$$l_i = \arg\max_{k \in \mathcal{K}_i} \mathcal{N}(\mathbf{x}_i; \hat{\boldsymbol{\theta}}_k) \qquad (8)$$
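With uniform mixing weights over each pixel's candidate set, equations (6)-(8) reduce labeling to an argmax over the candidate Gaussian densities. A minimal sketch of that rule follows, using diagonal covariances for simplicity; all names here are ours, not the paper's:

```python
import math

def diag_gauss(x, mean, var):
    """Gaussian density with diagonal covariance (per-dimension variances)."""
    quad = sum((a - m) ** 2 / v for a, m, v in zip(x, mean, var))
    norm = math.prod(2.0 * math.pi * v for v in var) ** -0.5
    return norm * math.exp(-0.5 * quad)

def assign_label(x, candidates, params):
    """Equation (8): with uniform weights over the candidate set, the
    posterior argmax reduces to the candidate Gaussian of largest density."""
    return max(candidates, key=lambda k: diag_gauss(x, *params[k]))
```

Because the uniform weights cancel in the posterior of equation (7), no normalization is needed before taking the argmax.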

III-B Parameter estimation

Maximum likelihood estimation is used to estimate the parameters in $\boldsymbol{\theta}$. Suppose that $X_i$, $i = 1, \dots, N$, are independently distributed. For all observed vectors $\mathbf{x}_i$, the logarithmic likelihood function will be

$$L(\boldsymbol{\theta}) = \sum_{i=1}^{N} \ln \frac{1}{|\mathcal{K}_i|} \sum_{k \in \mathcal{K}_i} \mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k) \qquad (9)$$

Because $\sum_{i} \ln \frac{1}{|\mathcal{K}_i|}$ is constant, the value of $\boldsymbol{\theta}$ that maximizes $L(\boldsymbol{\theta})$ will be the same as the value of $\boldsymbol{\theta}$ that maximizes

$$\tilde{L}(\boldsymbol{\theta}) = \sum_{i=1}^{N} \ln \sum_{k \in \mathcal{K}_i} \mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k) \qquad (10)$$

According to Jensen's inequality, $\tilde{L}(\boldsymbol{\theta})$ is greater than or equal to $Q(\boldsymbol{\theta}, R)$ as shown below.

$$\tilde{L}(\boldsymbol{\theta}) = \sum_{i=1}^{N} \ln \sum_{k \in \mathcal{K}_i} R_{i,k} \frac{\mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k)}{R_{i,k}} \qquad (11)$$

$$\geq \sum_{i=1}^{N} \sum_{k \in \mathcal{K}_i} R_{i,k} \ln \frac{\mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k)}{R_{i,k}} =: Q(\boldsymbol{\theta}, R) \qquad (12)$$

where $R_{i,k} > 0$ for $i \in \{1, \dots, N\}$ and $k \in \mathcal{K}_i$, and $\sum_{k \in \mathcal{K}_i} R_{i,k} = 1$. We now use the expectation-maximization algorithm to iteratively find the value of $\boldsymbol{\theta}$ that maximizes $Q(\boldsymbol{\theta}, R)$ to approach the maximum of $\tilde{L}(\boldsymbol{\theta})$ with two steps: the expectation step (E-step) and the maximization step (M-step).

E-step: once a guess of $\boldsymbol{\theta}$ is given, $Q(\boldsymbol{\theta}, R)$ is expected to be tightly attached to $\tilde{L}(\boldsymbol{\theta})$. To this end, $R$ is required to ensure $Q(\boldsymbol{\theta}, R) = \tilde{L}(\boldsymbol{\theta})$. Equation (13) is a sufficient condition for Jensen's inequality to hold with equality in (12).

$$\frac{\mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k)}{R_{i,k}} = c_i \qquad (13)$$

where $c_i$ is a constant. Since $\sum_{k \in \mathcal{K}_i} R_{i,k} = 1$, $c_i$ can be eliminated, and hence $R$ can be updated by equation (14) to make the equality hold.

$$R_{i,k} = \frac{\mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_k)}{\sum_{k' \in \mathcal{K}_i} \mathcal{N}(\mathbf{x}_i; \boldsymbol{\theta}_{k'})} \qquad (14)$$

M-step: in this step, $\boldsymbol{\theta}$ is derived by maximizing $Q(\boldsymbol{\theta}, R)$ with a given $R$. To do this, we first calculate the derivatives of $Q$ with respect to the mean vectors $\boldsymbol{\mu}_k$ and covariance matrices $\Sigma_k$, and set the derivatives to zero, as shown in equations (15)-(17). Then the parameters are obtained by solving equation (17), which yields equations (18) and (19).

$$\frac{\partial Q}{\partial \boldsymbol{\mu}_k} = \sum_{i \in P_k} R_{i,k}\, \Sigma_k^{-1} (\mathbf{x}_i - \boldsymbol{\mu}_k) \qquad (15)$$

$$\frac{\partial Q}{\partial \Sigma_k} = -\frac{1}{2} \sum_{i \in P_k} R_{i,k} \left( \Sigma_k^{-1} - \Sigma_k^{-1} (\mathbf{x}_i - \boldsymbol{\mu}_k)(\mathbf{x}_i - \boldsymbol{\mu}_k)^T \Sigma_k^{-1} \right) \qquad (16)$$

$$\frac{\partial Q}{\partial \boldsymbol{\mu}_k} = \mathbf{0}, \qquad \frac{\partial Q}{\partial \Sigma_k} = \mathbf{0} \qquad (17)$$

$$\boldsymbol{\mu}_k = \frac{\sum_{i \in P_k} R_{i,k}\, \mathbf{x}_i}{\sum_{i \in P_k} R_{i,k}} \qquad (18)$$

$$\Sigma_k = \frac{\sum_{i \in P_k} R_{i,k}\, (\mathbf{x}_i - \boldsymbol{\mu}_k)(\mathbf{x}_i - \boldsymbol{\mu}_k)^T}{\sum_{i \in P_k} R_{i,k}} \qquad (19)$$

After initializing $\boldsymbol{\theta}$, the estimate of $\boldsymbol{\theta}$ is obtained by iteratively updating $R$ and $\boldsymbol{\theta}$ using equations (14), (18), and (19) until $\tilde{L}(\boldsymbol{\theta})$ converges.
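The E-step/M-step alternation can be illustrated on a toy one-dimensional mixture. The sketch below fixes uniform weights and a shared unit variance and re-estimates only the means, mirroring the alternation of the responsibility update and the mean update; it is an illustration of the generic EM pattern, not the paper's implementation:

```python
import math

def em_gmm_1d(data, means, var=1.0, iters=20):
    """Minimal EM sketch for a 1-D mixture with uniform weights and a
    shared fixed variance: only the means are re-estimated."""
    for _ in range(iters):
        # E-step: responsibilities, as in the responsibility update (14)
        resp = []
        for x in data:
            ws = [math.exp(-0.5 * (x - m) ** 2 / var) for m in means]
            s = sum(ws)
            resp.append([w / s for w in ws])
        # M-step: responsibility-weighted means, as in the mean update (18)
        means = [
            sum(r[k] * x for r, x in zip(resp, data)) / sum(r[k] for r in resp)
            for k in range(len(means))
        ]
    return means
```

On well-separated data the responsibilities saturate near 0 or 1 within a few iterations, and the means settle on the cluster centers.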

III-C Algorithm in practice

Although the estimate of $\Sigma_k$ in section III-B supports full covariance matrices, i.e., a covariance matrix with all its elements free as shown in equation (19), only block diagonal matrices are used in this work (see equation (20)). This is because computing on block diagonal matrices is more efficient than computing on full matrices, and full matrices do not bring better performance in accuracy.

$$\Sigma_k = \begin{pmatrix} \Sigma_k^s & \mathbf{0} \\ \mathbf{0} & \Sigma_k^c \end{pmatrix} \qquad (20)$$

where $\Sigma_k^s$ and $\Sigma_k^c$ respectively represent the spatial covariance matrix and the color covariance matrix for superpixel $k$. For color images, it is encouraged to split the color covariance matrices into lower dimensional matrices to save computation. For example, if an image in CIELAB color space is inputted, it is better to put the color-opponent dimensions $a$ and $b$ into a 2 by 2 covariance matrix. In this case, $\Sigma_k$ in equation (20) will become

$$\Sigma_k = \begin{pmatrix} \Sigma_k^s & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \sigma_{k,\ell}^2 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \Sigma_k^{ab} \end{pmatrix} \qquad (21)$$

However, we will keep using (20) to discuss the proposed algorithm for simplicity.

The covariance matrices will be updated according to equations (22) and (23), which are derived by replacing $\Sigma_k$ in equation (16) with the block diagonal matrix in equation (20), and by further solving (17).

$$\Sigma_k^s = \frac{\sum_{i \in P_k} R_{i,k}\, (\mathbf{p}_i - \boldsymbol{\mu}_k^s)(\mathbf{p}_i - \boldsymbol{\mu}_k^s)^T}{\sum_{i \in P_k} R_{i,k}} \qquad (22)$$

$$\Sigma_k^c = \frac{\sum_{i \in P_k} R_{i,k}\, (\mathbf{c}_i - \boldsymbol{\mu}_k^c)(\mathbf{c}_i - \boldsymbol{\mu}_k^c)^T}{\sum_{i \in P_k} R_{i,k}} \qquad (23)$$

where $\mathbf{p}_i$ and $\boldsymbol{\mu}_k^s$ are the spatial components of $\mathbf{x}_i$ and $\boldsymbol{\mu}_k$, and $\mathbf{c}_i$ and $\boldsymbol{\mu}_k^c$ are, for grayscale images, the intensity components, or, for color images, the color components of $\mathbf{x}_i$ and $\boldsymbol{\mu}_k$.

Since $\Sigma_k^s$ and $\Sigma_k^c$ are positive semi-definite in practice, they may sometimes not be invertible. To avoid this trouble, we first compute the eigendecompositions of the covariance matrices as shown in equations (24) and (25), then the eigenvalues on the major diagonals of $\Lambda_k^s$ and $\Lambda_k^c$ are modified using equations (26) and (27), and finally $\hat{\Sigma}_k^s$ and $\hat{\Sigma}_k^c$ are reconstructed via equations (28) and (29).

$$\Sigma_k^s = U_k^s \Lambda_k^s (U_k^s)^T \qquad (24)$$

$$\Sigma_k^c = U_k^c \Lambda_k^c (U_k^c)^T \qquad (25)$$

where $\Lambda_k^s$ and $\Lambda_k^c$ are diagonal matrices with eigenvalues on their respective major diagonals, and $U_k^s$ and $U_k^c$ are orthogonal matrices. We use $\lambda_{k,j}^s$ and $\lambda_{k,j}^c$ to denote the respective eigenvalues on the major diagonals of $\Lambda_k^s$ and $\Lambda_k^c$. If the input image is grayscale, then $\Sigma_k^c$, $\Lambda_k^c$, and $U_k^c$ are scalars, and $U_k^c = 1$.

$$\hat{\lambda}_{k,j}^s = \max(\lambda_{k,j}^s, \varepsilon_s) \qquad (26)$$

$$\hat{\lambda}_{k,j}^c = \max(\lambda_{k,j}^c, \varepsilon_c) \qquad (27)$$

where $\varepsilon_s$ and $\varepsilon_c$ are two constants. Although these two constants are originally designed to prevent the covariance matrices from being singular, they also give an opportunity to control the regularity of the generated superpixels by weighing the relative importance between spatial proximity and color similarity. For instance, a larger $\varepsilon_c$ produces more regular superpixels, and the opposite is true for a smaller $\varepsilon_c$. As $\varepsilon_s$ and $\varepsilon_c$ are opposite to each other in effect, we fix $\varepsilon_s$ and leave $\varepsilon_c$ for detailed description in section IV.

$$\hat{\Sigma}_k^s = U_k^s \hat{\Lambda}_k^s (U_k^s)^T \qquad (28)$$

$$\hat{\Sigma}_k^c = U_k^c \hat{\Lambda}_k^c (U_k^c)^T \qquad (29)$$

where $\hat{\Lambda}_k^s$ and $\hat{\Lambda}_k^c$ are diagonal matrices with $\hat{\lambda}_{k,j}^s$ and $\hat{\lambda}_{k,j}^c$ on their respective major diagonals.
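The decompose-modify-reconstruct step can be sketched in a few lines; treating the constants as eigenvalue floors is our reading of the singularity safeguard, not necessarily the paper's exact rule:

```python
import numpy as np

def clamp_covariance(cov, floor):
    """Eigendecompose a symmetric covariance, raise any eigenvalue below
    `floor` up to `floor`, and reconstruct. The result is symmetric and
    invertible, since every eigenvalue is now at least `floor` > 0."""
    vals, vecs = np.linalg.eigh(cov)   # ascending eigenvalues, orthonormal U
    vals = np.maximum(vals, floor)     # the modification step
    return vecs @ np.diag(vals) @ vecs.T  # the reconstruction step
```

A rank-deficient input such as diag(1, 0) with a floor of 0.5 comes back as diag(1, 0.5), so the inverse used in the Gaussian density always exists.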

In the proposed algorithm, the mean vectors $\boldsymbol{\mu}_k$ are initialized using center pixels sampled over the input image uniformly at fixed horizontal and vertical intervals $v_x$ and $v_y$, i.e., $\boldsymbol{\mu}_k = \mathbf{x}_{i_k}$, where $i_k$ is the pixel at position

$$\mathbf{p}_{i_k} = \left( \left(a_k + \tfrac{1}{2}\right) v_x,\ \left(b_k + \tfrac{1}{2}\right) v_y \right), \quad a_k = (k - 1) \bmod \tfrac{W}{v_x}, \quad b_k = \left\lfloor (k - 1) \,/\, \tfrac{W}{v_x} \right\rfloor \qquad (30)$$

We initialize $\Sigma_k^s$ with $\mathrm{diag}(v_x^2, v_y^2)$ so that neighboring superpixels can be well overlapped at the beginning. The initialization of $\Sigma_k^c$ is not as straightforward; the basic idea is to set its main diagonal equal to the square of a small color distance $\lambda$ at which two pixels are perceptually uniform. The effect of different values of $\lambda$ will be discussed in section IV.

Once the parameter $\boldsymbol{\theta}$ is initialized, it is estimated by iteratively updating (14), (18), (28), and (29) until $\tilde{L}(\boldsymbol{\theta})$ converges. As a preprocessing step for subsequent applications, a superpixel algorithm should run as fast as possible. We have found that iterating 10 times is sufficient for most images without checking convergence; we will use this iteration number for all our experiments and denote it with $T$ to avoid confusion.

As the connectivity of superpixels cannot be guaranteed, a postprocessing step is required to enforce connectivity of the generated superpixels. This is done by sorting the isolated superpixels in ascending order of size, and sequentially merging each small isolated superpixel, i.e., one smaller than one fourth of the desired superpixel size, into its nearest neighboring superpixel, with only intensity or color taken into account. Once an isolated superpixel (source) is merged into another superpixel (destination), the size of the source superpixel is cleared to zero, and the size of the destination superpixel is updated by adding the size of the source superpixel. This size-updating trick prevents the sizes of the produced superpixels from varying significantly.
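A minimal sketch of such a connectivity-enforcing postprocess on a grayscale label map follows (4-connectivity, merging by closest mean intensity; the neighbor search and data layout are our simplifications, not the paper's implementation):

```python
from collections import deque

def enforce_connectivity(labels, colors, min_size):
    """Find 4-connected components of the label map, then merge each
    component smaller than `min_size` (smallest first) into the neighboring
    component with the closest mean intensity, growing the destination's
    size as described in the text. Inputs are 2-D row-major lists."""
    h, w = len(labels), len(labels[0])
    comp = [[-1] * w for _ in range(h)]
    comps = []  # pixel lists, one per connected component
    for y in range(h):
        for x in range(w):
            if comp[y][x] != -1:
                continue
            cid, queue, pixels = len(comps), deque([(y, x)]), []
            comp[y][x] = cid
            while queue:  # flood fill one component
                cy, cx = queue.popleft()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and comp[ny][nx] == -1 \
                            and labels[ny][nx] == labels[y][x]:
                        comp[ny][nx] = cid
                        queue.append((ny, nx))
            comps.append(pixels)
    sizes = [len(p) for p in comps]
    mean = [sum(colors[y][x] for y, x in p) / len(p) for p in comps]
    for cid in sorted(range(len(comps)), key=lambda c: sizes[c]):
        if sizes[cid] == 0 or sizes[cid] >= min_size:
            continue  # already merged away, or large enough to keep
        best = None  # (intensity distance, destination component)
        for y, x in comps[cid]:
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and comp[ny][nx] != cid:
                    d = abs(mean[comp[ny][nx]] - mean[cid])
                    if best is None or d < best[0]:
                        best = (d, comp[ny][nx])
        if best is not None:
            dst = best[1]
            for y, x in comps[cid]:
                comp[y][x] = dst
            sizes[dst] += sizes[cid]  # destination grows
            sizes[cid] = 0            # source is cleared
            comps[dst].extend(comps[cid])
    return comp
```

Clearing the source size to zero, as the text describes, also guards against merging the same component twice.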

The proposed algorithm is summarized in Algorithm 1.

0:  Input: $v_x$ and $v_y$, or $K$; $T$.
0:  Output: superpixel labels $l_i$, $i = 1, \dots, N$.
1:  Initialize parameter $\boldsymbol{\theta}$.
2:  Update $R$ using equation (14), and set $t = 1$.
3:  while $t \le T$ do
4:     Update $\boldsymbol{\mu}_k$ using equation (18).
5:     Update $\hat{\Sigma}_k^s$ and $\hat{\Sigma}_k^c$ using equations (28) and (29).
6:     Update $R$ using equation (14), and set $t = t + 1$.
7:  end while
8:  $l_i$ are determined by equation (8).
9:  Postprocessing for connectivity enforcement.
Algorithm 1 The proposed superpixel algorithm.

III-D Analysis of the proposed method

As the frequency of a single processor is difficult to improve, modern processors are designed with parallel architectures. If an algorithm can be implemented with parallel techniques, its performance generally scales with the number of parallel processing units, and its computational efficiency can be significantly improved on multi-core or many-core systems. Fortunately, the most expensive part of our algorithm, namely the iterative updating of $R$ and $\boldsymbol{\theta}$, can be executed in parallel, as each $R_{i,k}$ can be updated independently, and so can each $\boldsymbol{\mu}_k$, $\hat{\Sigma}_k^s$, and $\hat{\Sigma}_k^c$. In our experiments, we will show that our C++ implementation easily achieves speedups on multi-core CPUs with only a few OpenMP directives inserted.

By the definition of $\mathcal{K}_i$, the number of candidate superpixels per pixel is bounded by a small constant. Therefore, the updating of $R$ has a complexity of $O(N)$. Because we use a constant number of iterations $T$ in the proposed algorithm, the complexity of the whole iteration remains $O(N)$. By the definition of $P_k$, we have $|P_k| \le 4 v_x v_y$. Based on equations (18), (22), and (23), the complexity of updating $\boldsymbol{\theta}$ is $O(K v_x v_y)$; since $K v_x v_y \approx N$, the updating of $\boldsymbol{\theta}$ also has a complexity of $O(N)$. In the worst case, the sorting procedure in the postprocessing step requires $O(n \log n)$ operations, where $n$ is the number of isolated superpixels. The merging step needs $O(m \bar{a})$ operations, where $m$ is the number of small isolated superpixels and $\bar{a}$ represents the average number of their adjacent neighbors. In practice, $n \ll N$, so the operations required for the postprocessing step can be ignored. Therefore, the proposed superpixel algorithm is of linear complexity $O(N)$.

Fig. 11: Effect of different $\lambda$. Experiments are performed on BSDS500 to generate different numbers of superpixels by adjusting $K$ or $v_x$ and $v_y$, and results are averaged over the 500 images. The results of BR, UE, and ASA are correspondingly plotted in (a), (b), and (c). In order to show more detail, part of each plot is zoomed in. (Best viewed in color.)
Fig. 17: Visual results with five different values of $\lambda$, shown in (a)-(e). The test image is from BSDS500, and approximately 400 superpixels are extracted in each image.

IV Experiment

In this section, algorithms are evaluated in terms of accuracy, computational efficiency, and visual effects. Like many state-of-the-art superpixel algorithms, we use the CIELAB color space for our experiments because it is perceptually uniform for small color distances.

Accuracy: three commonly used metrics are adopted: boundary recall (BR), under-segmentation error (UE), and achievable segmentation accuracy (ASA). To assess the performance of the selected algorithms, experiments are conducted on the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500), which is an extension of BSDS300. These two data sets have been widely used for evaluating superpixel algorithms. BSDS500 contains 500 images, each of size 481×321 or 321×481, with at least four ground-truth human annotations.

  1. BR measures the percentage of ground-truth boundary pixels correctly recovered by the superpixel boundaries. A true boundary pixel is considered correctly recovered if it falls within two pixels of at least one superpixel boundary. A high BR indicates that very few true boundaries are missed.

  2. A superpixel should not cross ground-truth boundaries, or, in other words, it should not cover more than one object. To quantify this notion, UE measures the extent to which superpixels “leak” across the ground-truth segments they overlap, as shown in equation (31).

    $$UE = \frac{1}{N} \left( \sum_{j} \sum_{k:\, |s_k \cap g_j| > B\, |s_k|} |s_k| \; - \; N \right) \qquad (31)$$

    where $s_k$ and $g_j$ are the pixel sets of superpixel $k$ and ground-truth segment $j$. $B = 0.05$ is generally accepted.

  3. If we assign every superpixel the label of the ground-truth segment into which most of its pixels fall, how much segmentation accuracy can we achieve, i.e., how many pixels are correctly segmented? ASA is designed to answer this question. Its formula is defined in equation (32), in which $\mathcal{G}$ is the set of ground-truth segments.

    $$ASA = \frac{1}{N} \sum_{k} \max_{g_j \in \mathcal{G}} |s_k \cap g_j| \qquad (32)$$
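ASA as defined above can be computed directly from two flat label lists; a small sketch (function and variable names are ours):

```python
from collections import Counter

def asa(superpixels, ground_truth):
    """Achievable segmentation accuracy, equation (32): label every
    superpixel with its majority ground-truth segment and return the
    fraction of pixels that are then correctly segmented."""
    overlap = Counter(zip(superpixels, ground_truth))
    best = {}
    for (sp, gt), n in overlap.items():
        best[sp] = max(best.get(sp, 0), n)  # max_j |s_k ∩ g_j|
    return sum(best.values()) / len(superpixels)
```

A perfect segmentation scores 1.0; every pixel inside a superpixel that disagrees with that superpixel's majority segment lowers the score by 1/N.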

Computational efficiency: execution time is used to quantify this property.

Fig. 21: Results with different $\varepsilon_c$. Experiments are performed on BSDS500 to generate different numbers of superpixels by adjusting $K$ or $v_x$ and $v_y$, and results are averaged over the 500 images. The results of BR, UE, and ASA are correspondingly plotted in (a), (b), and (c). In order to show more detail, part of each plot is zoomed in. (Best viewed in color.)
Fig. 27: Visual results with five different values of $\varepsilon_c$, shown in (a)-(e). The test image is from BSDS500, and approximately 400 superpixels are extracted in each image. The second row is enlarged from the rectangular region marked in the first row.

IV-A Effect of $\lambda$ and $\varepsilon_c$

As shown in Fig. 11, there is no obvious pattern in the effect of $\lambda$. In Fig. 11, the maximum difference between two curves is around 0.001 to 0.006, which is very small. Although it seems that a small $\lambda$ leads to a better BR result, the same is not true for UE and ASA; for instance, in the enlarged region of Fig. 11(b), one value of $\lambda$ is only slightly better than a smaller one. Visual results with different $\lambda$ are plotted in Fig. 17; it is hard for a human to distinguish any difference among the five results.

$\varepsilon_c$ can be used to control the regularity of the generated superpixels. As shown in Fig. 21, a small difference in $\varepsilon_c$ does not produce an obvious variation in UE or ASA, but it does affect the BR results. In other words, a small variation of $\varepsilon_c$ affects the boundaries of the produced superpixels much more than their content. Generally, a larger $\varepsilon_c$ leads to more regular superpixels with smoother boundaries. Conversely, the shape of superpixels generated with a smaller $\varepsilon_c$ is relatively irregular (see Fig. 27). Because superpixels with irregular shapes produce more boundary pixels, the BR result with a small $\varepsilon_c$ is better than that with a greater one.

We will use fixed values of $\lambda$ and $\varepsilon_c$ in the following experiments. Although this setting does not give the best performance in accuracy, the shape of superpixels under this setting is regular and visually pleasant (see Fig. 27(d)). Moreover, it is enough to outperform state-of-the-art algorithms, as shown in Fig. 31.

IV-B Parallel scalability

In order to evaluate scalability with the number of processors, we test our implementation on a machine with an Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz and 8 GB RAM. The source code is not optimized for any specific architecture. Only two OpenMP directives are added, for the updating of $R$, $\boldsymbol{\mu}_k$, and $\hat{\Sigma}_k$, as they can be computed independently (see section III-D). As listed in Table I, for a given image, using more cores yields better performance.

Resolution | 1 core  | 2 cores | 4 cores | 6 cores
240×320    | 393.646 | 303.821 | 227.078 | 200.708
320×480    | 776.586 | 589.785 | 400.073 | 321.548
480×640    | 1569.74 | 1011.62 | 743.629 | 624.561
640×960    | 3186.71 | 2244.12 | 1353.72 | 1069.79
TABLE I: Run-time (ms) of our implementation on images of various resolutions. The program is executed using 1, 2, 4, and 6 cores.
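The parallelization pattern described above can be sketched as follows. This is an illustrative example with hypothetical function and variable names, not the authors' implementation; it shows why a single OpenMP directive suffices when per-superpixel updates are independent (here, a mean update over 1-D pixel values for brevity).

```cpp
#include <vector>

// Re-estimate each superpixel's mean from the pixels assigned to it.
// Each loop iteration writes only mean[k], so iterations are independent
// and one OpenMP worksharing directive parallelizes the whole update.
std::vector<double> update_means(const std::vector<double>& pixels,
                                 const std::vector<int>& labels,
                                 int num_superpixels) {
    std::vector<double> mean(num_superpixels, 0.0);
    #pragma omp parallel for  // iterations touch disjoint entries of 'mean'
    for (int k = 0; k < num_superpixels; ++k) {
        double sum = 0.0;
        int count = 0;
        for (std::size_t i = 0; i < pixels.size(); ++i)
            if (labels[i] == k) {
                sum += pixels[i];
                ++count;
            }
        mean[k] = count > 0 ? sum / count : 0.0;
    }
    return mean;
}
```

When compiled without OpenMP the pragma is simply ignored and the code runs serially, which matches the single-core column of Table I.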

IV-C Comparison with state-of-the-art algorithms

We compare the proposed algorithm to eight state-of-the-art superpixel segmentation algorithms: LSC [10] (http://jschenthu.weebly.com/projects.html), SLIC [14] (http://ivrl.epfl.ch/research/superpixels), SEEDS [8] (http://www.mvdblive.org/seeds/), ERS [9] (https://github.com/mingyuliutw/ers), TurboPixels [16] (http://www.cs.toronto.edu/~babalex/research.html), LRW [12] (https://github.com/shenjianbing/lrw14), VCells [7] (http://www-personal.umich.edu/~jwangumi/software.html), and Waterpixels [15] (http://cmm.ensmp.fr/~machairas/waterpixels.html). The results of the eight algorithms are all generated from the implementations provided by the authors on their respective websites, using their default parameters except for the desired number of superpixels, which is set by the user.

As shown in Fig. 31, our method outperforms the selected state-of-the-art algorithms, especially for UE and ASA. It is not easy to distinguish our result from that of LSC in Fig. 31(a). However, with a different setting of one parameter, our result obviously outperforms LSC, as displayed in Fig. 32.

To compare the run-time of the selected algorithms, we test them on a desktop machine equipped with an Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz and 8 GB RAM. The results are plotted in Fig. 35. According to Fig. 35(b), as the size of the input image increases, the run-time of our algorithm grows linearly, which confirms experimentally that our algorithm is of linear complexity.

A visual comparison is displayed in Fig. 40. According to the zoomed regions, only the proposed algorithm correctly reveals the segment boundaries; our superpixel boundaries adhere to objects very well. LSC gives a genuinely competitive result; however, parts of the objects are still under-segmented. The superpixels extracted by SEEDS and ERS are very irregular, and their sizes vary tremendously. The remaining five algorithms can generate regular superpixels, but they adhere to object boundaries poorly.

Fig. 31: Comparison with state-of-the-art algorithms. Experiments are performed on BSDS500 to generate different numbers of superpixels by adjusting the desired number of superpixels; results are averaged over 500 images. The results of BR, UE, and ASA are plotted in (a), (b), and (c), respectively.
Fig. 32: Comparison of BR between LSC and our method. Without changing the default values of the other parameters in our method, a different setting of one parameter is used in this figure.
Fig. 35: Comparison of run-time. Seven algorithms are compared in (a). In order to show more detail, the run-time of the fastest four algorithms is plotted in (b). LRW is not included in the two figures due to its slow speed.
Fig. 40: Visual comparison. The test image is selected from BSDS500. Each algorithm extracts approximately 200 superpixels. For each segmentation, four parts are enlarged to display more details.

V Conclusion

This paper presents an alternative method for superpixel segmentation: each superpixel is associated with a Gaussian distribution with unknown parameters; a Gaussian mixture model is then constructed for each pixel; and finally, the superpixel label of a pixel is determined by a posterior probability once the unknown parameters have been estimated by the proposed algorithm, which is derived from the expectation-maximization method. The main difference between the traditional GMM method and the proposed one is that data points in our model are not assumed to be identically distributed. Another important contribution is the use of eigendecomposition in updating the covariance matrices.
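The labeling rule summarized above can be illustrated with a minimal sketch. The example below uses 1-D Gaussians for brevity (the paper works with higher-dimensional color and position features), and the function name is hypothetical: once the mixture parameters are estimated, a pixel simply takes the label of the component with the largest posterior probability.

```cpp
#include <cmath>
#include <vector>

// Assign the label of the Gaussian component with maximum posterior.
// 'weight', 'mu', and 'var' are the estimated mixing weights, means,
// and variances of the components associated with this pixel.
int posterior_label(double x,
                    const std::vector<double>& weight,
                    const std::vector<double>& mu,
                    const std::vector<double>& var) {
    int best = 0;
    double best_score = -1.0;
    for (std::size_t k = 0; k < mu.size(); ++k) {
        double d = x - mu[k];
        // Unnormalized posterior: weight_k * N(x; mu_k, var_k). The shared
        // normalizing constant is the same for all k and cannot change the
        // argmax, so it is omitted.
        double score = weight[k] / std::sqrt(var[k])
                     * std::exp(-0.5 * d * d / var[k]);
        if (score > best_score) {
            best_score = score;
            best = static_cast<int>(k);
        }
    }
    return best;
}
```

Since only the argmax matters, the comparison is cheap; this is one reason the labeling step adds little to the overall linear cost.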

The proposed algorithm is of linear complexity, which has been shown by both theoretical analysis and experimental results. Moreover, it can be implemented using parallel techniques, and its run-time scales with the number of processors. The comparison with state-of-the-art algorithms shows that the proposed algorithm outperforms the selected methods in accuracy and presents a competitive performance in computational efficiency.

As a contribution to the open-source community, we will make our test code publicly available at https://github.com/ahban.

References

  • [1] Z. Li, X.-M. Wu, and S.-F. Chang, “Segmentation using superpixels: A bipartite graph partitioning approach,” in CVPR, 2012, pp. 789–796.
  • [2] Z. Lu, Z. Fu, T. Xiang, P. Han, L. Wang, and X. Gao, “Learning from weak and noisy labels for semantic segmentation,” TPAMI, vol. 39, no. 3, pp. 486–500, 2017.
  • [3] M. Gong, Y. Qian, and L. Cheng, “Integrated foreground segmentation and boundary matting for live videos,” TIP, vol. 24, no. 4, pp. 1356–1370, 2015.
  • [4] F. Yang, H. Lu, and M.-H. Yang, “Robust superpixel tracking,” TIP, vol. 23, no. 4, pp. 1639–1651, 2014.
  • [5] F. Cheng, H. Zhang, M. Sun, and D. Yuan, “Cross-trees, edge and superpixel priors-based cost aggregation for stereo matching,” PR, vol. 48, no. 7, pp. 2269 – 2278, 2015.
  • [6] J. Ma, H. Zhou, J. Zhao, Y. Gao, J. Jiang, and J. Tian, “Robust feature matching for remote sensing image registration via locally linear transforming,” TGRS, vol. 53, no. 12, pp. 6469–6481, 2015.
  • [7] J. Wang and X. Wang, “VCells: Simple and efficient superpixels using edge-weighted centroidal voronoi tessellations,” TPAMI, vol. 34, no. 6, pp. 1241–1247, 2012.
  • [8] M. Van den Bergh, X. Boix, G. Roig, and L. Van Gool, “SEEDS: Superpixels extracted via energy-driven sampling,” IJCV, vol. 111, no. 3, pp. 298–314, 2015.
  • [9] M.-Y. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa, “Entropy rate superpixel segmentation,” in CVPR, 2011, pp. 2097–2104.
  • [10] Z. Li and J. Chen, “Superpixel segmentation using linear spectral clustering,” in CVPR, 2015, pp. 1356–1363.
  • [11] L. Duan and F. Lafarge, “Image partitioning into convex polygons,” in CVPR, 2015, pp. 3119–3127.
  • [12] J. Shen, Y. Du, W. Wang, and X. Li, “Lazy random walks for superpixel segmentation,” TIP, vol. 23, no. 4, pp. 1451–1462, 2014.
  • [13] A. P. Moore, S. Prince, J. Warrell, U. Mohammed, and G. Jones, “Superpixel lattices,” in CVPR, 2008, pp. 1–8.
  • [14] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” TPAMI, vol. 34, no. 11, pp. 2274–2282, 2012.
  • [15] V. Machairas, M. Faessel, D. Cardenas-Pena, T. Chabardes, T. Walter, and E. Decenciere, “Waterpixels,” TIP, vol. 24, no. 11, pp. 3707–3716, 2015.
  • [16] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi, “TurboPixels: Fast superpixels using geometric flows,” TPAMI, vol. 31, no. 12, pp. 2290–2297, 2009.
  • [17] J. Shi and J. Malik, “Normalized cuts and image segmentation,” TPAMI, vol. 22, no. 8, pp. 888–905, 2000.
  • [18] G. Mori, “Guiding model search using segmentation,” in ICCV, 2005, pp. 1417–1423.
  • [19] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the em algorithm,” Journal of the royal statistical society. Series B (methodological), pp. 1–38, 1977.
  • [20] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” TPAMI, vol. 33, no. 5, pp. 898–916, 2011.
  • [21] X. Ren and J. Malik, “Learning a classification model for segmentation,” in ICCV, 2003, pp. 10–17.
  • [22] J. Peng, J. Shen, A. Yao, and X. Li, “Superpixel optimization using higher order energy,” TCSVT, vol. 26, no. 5, pp. 917–927, 2016.
  • [23] Y. Zhang, X. Li, X. Gao, and C. Zhang, “A simple algorithm of superpixel segmentation with boundary constraint,” TCSVT, vol. PP, no. 99, pp. 1–1, 2016.
  • [24] C. Rohkohl and K. Engel, “Efficient image segmentation using pairwise pixel similarities,” in Joint Pattern Recognition Symposium, 2007, pp. 254–263.
  • [25] S. Avidan and A. Shamir, “Seam carving for content-aware image resizing,” TOG, vol. 26, no. 3, Jul. 2007.
  • [26] S. Zhu, D. Cao, S. Jiang, Y. Wu, and P. Hu, “Fast superpixel segmentation by iterative edge refinement,” EL, vol. 51, no. 3, pp. 230–232, 2015.
  • [27] A. P. Moore, S. J. Prince, and J. Warrell, ““lattice cut”-constructing superpixels using layer constraints,” in CVPR, 2010, pp. 2117–2124.
  • [28] H. Fu, X. Cao, D. Tang, Y. Han, and D. Xu, “Regularity preserved superpixels and supervoxels,” TMM, vol. 16, no. 4, pp. 1165–1175, 2014.
  • [29] D. Tang, H. Fu, and X. Cao, “Topology preserved regular superpixel,” in ICME, 2012, pp. 765–768.
  • [30] P. Siva and A. Wong, “Grid seams: A fast superpixel algorithm for real-time applications,” in CRV, 2014, pp. 127–134.
  • [31] P. Siva, C. Scharfenberger, I. B. Daya, A. Mishra, and A. Wong, “Return of grid seams: A superpixel algorithm using discontinuous multi-functional energy seam carving,” in ICIP, 2015, pp. 1334–1338.
  • [32] M. Van den Bergh, X. Boix, G. Roig, B. de Capitani, and L. Van Gool, “SEEDS: Superpixels extracted via energy-driven sampling,” in ECCV, 2012, pp. 13–26.
  • [33] K. Yamaguchi, D. McAllester, and R. Urtasun, “Efficient joint segmentation, occlusion labeling, stereo and flow estimation,” in ECCV, 2014, pp. 756–771.
  • [34] J. Yao, M. Boben, S. Fidler, and R. Urtasun, “Real-time coarse-to-fine topologically preserving segmentation,” in CVPR, 2015, pp. 2947–2955.
  • [35] L. Li, J. Yao, J. Tu, X. Lu, K. Li, and Y. Liu, “Edge-based split-and-merge superpixel segmentation,” in ICIA, 2015, pp. 970–975.
  • [36] D. R. Martin, C. C. Fowlkes, and J. Malik, “Learning to detect natural image boundaries using local brightness, color, and texture cues,” TPAMI, vol. 26, no. 5, pp. 530–549, 2004.
  • [37] A. Vedaldi and S. Soatto, “Quick shift and kernel methods for mode seeking,” in ECCV, 2008, pp. 705–718.
  • [38] O. Veksler, Y. Boykov, and P. Mehrani, “Superpixels and supervoxels in an energy optimization framework,” in ECCV, 2010, pp. 211–224.
  • [39] Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” TPAMI, vol. 23, no. 11, pp. 1222–1239, 2001.
  • [40] Y. Zhang, R. Hartley, J. Mashford, and S. Burn, “Superpixels via pseudo-boolean optimization,” in ICCV, 2011, pp. 1387–1394.
  • [41] P. Neubert and P. Protzel, “Compact watershed and preemptive slic: On improving trade-offs of superpixel segmentation algorithms.” in ICPR, 2014, pp. 996–1001.
  • [42] Y. Kesavan and A. Ramanan, “One-pass clustering superpixels,” in ICIAfS, 2014, pp. 1–5.
  • [43] C. Y. Ren, V. A. Prisacariu, and I. D. Reid, “gSLICr: SLIC superpixels at over 250hz,” ArXiv e-prints, 2015.
  • [44] S. Jia, S. Geng, Y. Gu, J. Yang, P. Shi, and Y. Qiao, “NSLIC: SLIC superpixels based on nonstationarity measure,” in ICIP, 2015, pp. 4738–4742.
  • [45] Z. Ban, J. Liu, and J. Fouriaux, “GLSC: LSC superpixels at over 130 fps,” JRTIP, pp. 1–12, 2016.
  • [46] P. Felzenszwalb and D. Huttenlocher, “Efficient graph-based image segmentation,” IJCV, vol. 59, no. 2, pp. 167–181, 2004.
  • [47] D. Comaniciu and P. Meer, “Mean shift: A robust approach toward feature space analysis,” TPAMI, vol. 24, no. 5, pp. 603–619, 2002.
  • [48] L. Vincent and P. Soille, “Watersheds in digital spaces: an efficient algorithm based on immersion simulations,” TPAMI, vol. 13, no. 6, pp. 583–598, 1991.
  • [49] Y.-J. Liu, C.-C. Yu, M.-J. Yu, and Y. He, “Manifold slic: A fast method to compute content-sensitive superpixels,” in CVPR, 2016, pp. 651–659.
  • [50] P. Wang, G. Zeng, R. Gan, J. Wang, and H. Zha, “Structure-sensitive superpixels via geodesic distance,” IJCV, vol. 103, no. 1, pp. 1–21, 2013.