Dynamic Spectral Residual Superpixels

10/10/2019 ∙ by Jianchao Zhang, et al. ∙ University of Cambridge ∙ City University of Hong Kong

We consider the problem of segmenting an image into superpixels in the context of k-means clustering, in which we wish to decompose an image into local, homogeneous regions corresponding to the underlying objects. Our novel approach builds upon the widely used Simple Linear Iterative Clustering (SLIC) and incorporates a measure of object structure based on the spectral residual of an image. Building on this combination, we propose a modified initialisation scheme and search metric, which help preserve fine details. Our approach adheres better to object boundaries and avoids unnecessary segmentation of large, uniform areas, while remaining computationally tractable in comparison to other methods. We demonstrate through numerical and visual experiments that our approach outperforms state-of-the-art techniques.


1 Introduction

Image segmentation has been widely explored in computer vision, yet it remains an open problem. In particular, superpixel segmentation has become a pre-processing tool for several applications, including classification [23, 25], optical flow [18, 15], colour transfer [8], depth estimation [14, 4] and tracking [33, 35]. The central idea of superpixels is to split a given image into multiple clusters that reflect semantically meaningful regions.

There are several advantages to using a superpixel representation instead of working at the pixel level. Firstly, an application becomes computationally and representationally efficient, as the number of primitives is significantly reduced. Secondly, the natural redundancy in an image is exploited, and therefore features can be extracted on representative regions whilst reducing noise and increasing discriminative information [22, 1, 27].

Since the pioneering work of Ren and Malik [22], the community has devoted considerable effort to developing algorithmic approaches that improve over [22]. These approaches can be roughly divided into: graph-based methods e.g. [22, 10], path-based approaches e.g. [28], density-based models e.g. [30], contour models e.g. [11] and clustering methods e.g. [1, 17].

Out of all the approaches reported in the literature, Simple Linear Iterative Clustering (SLIC) [1] is perhaps the most popular method, offering good performance at low computational cost by building on Lloyd's algorithm [16] for k-means. The central idea of SLIC is to perform the superpixel partition via an iterative scheme that searches for similarities between points, ensuring at each step that points are assigned to the nearest cluster from the previous step.
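To make the localised assignment concrete, the following is a minimal sketch of a SLIC-style iteration on a grayscale image. This is an illustrative simplification, not the authors' implementation: real SLIC works in Lab colour space and includes a connectivity post-processing step, both omitted here.

```python
import numpy as np

def slic_like(image, K=4, m=10.0, iters=3):
    """Minimal SLIC-style clustering on a 2-D grayscale image.

    Each centre only searches a 2S x 2S window around itself, which is
    the key trick that makes SLIC fast compared with global k-means.
    """
    H, W = image.shape
    S = int(np.sqrt(H * W / K))  # grid interval between seeds
    # seed centres on a regular grid: (row, col, intensity)
    centres = [(y, x, image[y, x])
               for y in range(S // 2, H, S)
               for x in range(S // 2, W, S)]

    labels = -np.ones((H, W), dtype=int)
    dists = np.full((H, W), np.inf)
    for _ in range(iters):
        for k, (cy, cx, cc) in enumerate(centres):
            # restrict the search to a local window around the centre
            y0, y1 = max(0, int(cy) - S), min(H, int(cy) + S + 1)
            x0, x1 = max(0, int(cx) - S), min(W, int(cx) + S + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            d_col = (image[y0:y1, x0:x1] - cc) ** 2
            d_xy = (yy - cy) ** 2 + (xx - cx) ** 2
            D = np.sqrt(d_col + (m / S) ** 2 * d_xy)
            win = dists[y0:y1, x0:x1]
            mask = D < win
            win[mask] = D[mask]
            labels[y0:y1, x0:x1][mask] = k
        # recompute each centre as the mean of its assigned pixels
        new_centres = []
        for k, old in enumerate(centres):
            ys_k, xs_k = np.nonzero(labels == k)
            if len(ys_k):
                new_centres.append((ys_k.mean(), xs_k.mean(),
                                    image[ys_k, xs_k].mean()))
            else:
                new_centres.append(old)
        centres = new_centres
        dists.fill(np.inf)  # re-assign from scratch next iteration
    return labels
```

Because each pixel is only compared against the handful of centres whose windows cover it, the per-iteration cost is linear in the number of pixels, independent of K.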

The ability of SLIC to obtain a good segmentation with low computational cost comes from the observation that, by using a similarity metric, one can greatly reduce the number of distance calculations required. However, SLIC is also limited by its own construction, in particular by its search range, and one can observe two major limitations. Firstly, SLIC tends to segment large uniform regions of an image with more superpixels than are intuitively necessary, which limits resolution in other parts of the image. Secondly, in structure-rich parts of the image, the final superpixel size is much smaller than the search radius of SLIC, which leads to many unnecessary distance computations. Moreover, since we expect structure-rich parts of the image to have a higher density of superpixels, it may be efficient to perform the initial seeding of cluster centres in anticipation of this inhomogeneity.

In this work, we propose a new algorithmic approach, exhibited in Fig. 1, that improves upon SLIC, motivated by the drawbacks discussed above. We show that our approach outperforms SLIC and several works in the body of literature. Our main contributions are as follows.

  • We propose a new superpixel approach, which incorporates the saliency function of Hou et al. [9] as a proxy for object density. This leads to the following advantages.

    • By incorporating the saliency into the distance computation, we prevent unnecessary over-segmentation of large, uniform regions, such as the sky in the first example of Fig. 1, allowing greater focus on structure-rich parts of the image.

    • We propose a new seeding strategy, based on the inhomogeneity described by our structure measure. This allows for greater resolution changes at fewer iterations by focusing on relevant structures, and hence keeps fine details of the structures in the final segmentation.

  • We extensively evaluate our approach with a large range of numerical and visual experiments.

  • We demonstrate that our two major contributions mitigate the major drawbacks of the state-of-the-art techniques, by reporting the lowest undersegmentation error and highest boundary recall.

2 Related Work

In this section, we review the relevant body of literature. We then highlight the advantages of clustering-based methods, and their current drawbacks, which motivate our new algorithmic approach.

There have been several attempts in the literature to improve superpixel segmentation. One set of approaches tackles the problem using a graph representation of the image, where the partition is based on the similarity of the nodes, e.g. in colour, including [22, 24, 26, 6, 19, 34]. However, although promising results are reported, the computational time is often very high. Another perspective is followed by local mode-seeking algorithms, including the well-known Quick Shift, whose partition is based on an approximation of kernelised mean-shift [30]. However, there is no control over the number of superpixels or their compactness.

Another set of approaches addresses the superpixel partition problem as the task of finding the shortest path between seeds, for instance using the well-known Dijkstra algorithm, as reported in [28, 7]. However, this type of approach is usually unable to control compactness. We briefly mention other methods for superpixel segmentation: a body of work has proposed algorithms based on gradient ascent and other geometric methods [5, 31, 11]. For an extensive review of the literature, we refer the reader to [27].

In particular, in this work we focus on what is probably the most popular superpixel category: clustering-based approaches. This perspective builds on Lloyd's algorithm [16] for k-means clustering. The main idea of this algorithm is to partition a set of observations into clusters, in which each observation is assigned to the cluster with the nearest mean; it produces excellent results at the cost of high computational intensity. Within this category, one finds the top reference approach, Simple Linear Iterative Clustering (SLIC), proposed by Achanta et al. [1], which is a localised version of Lloyd's algorithm that is computationally much simpler while keeping excellent segmentation quality.

Following this philosophy, different algorithmic approaches have been proposed, including [32, 21, 20, 13]. Most recently, the authors of [2] proposed an improved version of SLIC, which computes a polygonal partition to adapt better to the geometry of the objects in the image. Maierhofer et al. [17] propose a dynamic refinement of this method, called dSLIC, which seeds the initial cluster points inhomogeneously and allows the search radius to vary across clusters, both according to a measure of local object density. This allows better capture of fine details in structure-rich regions, and further reduces computational complexity by eliminating unnecessary searches.

Let us also mention the closely related problem of salient object detection. In this problem, one has the simpler goal of identifying which regions in an image contain salient or novel information, and which contain only patterns and structures repeated throughout the image. This problem shares some similarities with image segmentation; for instance, one might hope that the salient objects are identified as superpixel regions. A hugely successful method for this problem, based on Fourier analysis, was proposed by Hou et al. [9], which inspires our current approach. More recent works include techniques based on graphs [34] or machine learning [12, 29].

3 Proposed Approach

In this section, we describe our superpixel approach in detail. Firstly, we formalise the definition of superpixels in terms of a clustering task. We then introduce the details of both our new structure measure and our initialisation strategy.

We view an input image, of width A and height B, as a map I : Ω → C, where Ω ⊂ ℝ² is a rectangular A × B domain and C an appropriate colour domain. We also define a metric d on Ω × C, representing the similarity of points in space and with different colour values, and a feature map F, which takes a subset Π ⊂ Ω and returns a pair in Ω × C. k-means clustering now seeks a partition of Ω into path-connected sets Π₁, …, Π_k such that, for each i, Π_i is exactly the set of points at which the infimum over clusters of the distance to the cluster's feature is attained at i.

Figure 1: Illustrative output of our approach against SLIC. SLIC tends to over-segment uniform areas with more superpixels than necessary, such as the sky in the first image, and fails to preserve fine structures, such as the owl's eyes and the roller coaster in the zoomed-in views.

3.1 Object Density Measure via Spectral Residual

The key strength of dSLIC [17] over SLIC is the recognition that objects in an image are not distributed uniformly, and that image segmentation can exploit this to improve both segmentation results and computational efficiency. Our approach exploits this same principle further, using the Spectral Residual approach proposed in [9] as a better measure of object density.

We briefly review the Fourier analysis leading to the definition of the spectral residual in [9]. For an image I, we write ℱ[I] for the Fourier transform, which is a matrix of the same dimensions as I, and whose arguments we will write as two-dimensional frequencies f. The log-spectrum of an image is then given by

L(f) = ℜ( log ℱ[I](f) ),   (1)

where ℜ denotes the real part; we also write P(f) = ℑ( log ℱ[I](f) ) for the imaginary part, or phase spectrum. The key insight of [9] is that much of the information contained within L is redundant, because L is, to a good approximation, locally linear. These features are encapsulated in the local average hₙ ∗ L, where hₙ is the n × n matrix consisting entirely of 1/n², and the residual log-spectrum, corresponding to the salient features, is given by

R(f) = L(f) − (hₙ ∗ L)(f).   (2)

The final saliency map, which we take as our measure of object density, is then given by recombining R with the phase spectrum, inverting the Fourier transform and adjusting the resulting function. Therefore, our proposed function reads:

S(x) = ( g_σ ∗ [ ℱ⁻¹( exp(R + iP) ) ]² )(x),   (3)

where the squaring ensures that the quantity considered is nonnegative, and the convolution with a Gaussian kernel g_σ ensures that the final result is smooth. For our purposes, we found a choice of σ that works well in practice. We then set a rescaling step for the search radius using (3), relative to the mean of the structure function on the image grid. We then propose to make the distance computations depend on our function, which reads:
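A minimal NumPy sketch of the saliency computation in (1)–(3) follows. This is illustrative: the box-filter size n and Gaussian width sigma are assumptions, and a colour image would first be converted to grayscale.

```python
import numpy as np

def spectral_residual_saliency(image, n=3, sigma=2.5):
    """Spectral residual saliency of Hou & Zhang: log-spectrum minus
    its local average, recombined with the phase and smoothed."""
    H, W = image.shape
    F = np.fft.fft2(image)
    log_amp = np.log(np.abs(F) + 1e-12)  # eq. (1): log-spectrum
    phase = np.angle(F)                  # phase spectrum
    # local n x n average of the log-spectrum (h_n * L)
    padded = np.pad(log_amp, n // 2, mode='edge')
    avg = np.zeros_like(log_amp)
    for dy in range(n):
        for dx in range(n):
            avg += padded[dy:dy + H, dx:dx + W]
    avg /= n * n
    residual = log_amp - avg             # eq. (2): spectral residual
    # eq. (3): recombine with phase, invert, square for nonnegativity
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # smooth with a separable, truncated Gaussian kernel
    r = int(3 * sigma)
    g = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    sal = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 0, sal)
    sal = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 1, sal)
    return sal
```

The resulting map is nonnegative and smooth, and can be used directly as the object density measure in the seeding and distance steps described below.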

#pseudocode for computing distances
input: compactness parameter, number of superpixels
while residual error above tolerance:
    for each cluster centre:
        if pixel lies within the rescaled search range:
            #distance computation
            compute the distance to the centre
        else:
            skip the pixel

    compute residual error
    increase t = t + 1

By incorporating our proposed function, which we use as a measure of object density, into the distance computation, one obtains two major advantages. Firstly, by dynamically adjusting the search range based on our function, one can connect uniform regions, and so avoid segmenting the image into unnecessarily small superpixels. This effect is illustrated in Fig. 1, for example in the second column, where our approach was able to keep the sky, and likewise the yellow car, in a single region. Secondly, our approach focuses on segmenting fine details by capturing relevant structures; this is visible in the owl's eyes and head in Fig. 1.
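The dynamic adjustment of the search range can be sketched as follows. Note that the specific rescaling rule here (mean saliency over local saliency, clipped to a sane band) is a plausible assumption with the qualitative behaviour described above, not necessarily the paper's exact rule.

```python
import numpy as np

def rescaled_search_radius(saliency, base_radius):
    """Enlarge the search radius in low-saliency (uniform) regions and
    shrink it in structure-rich regions, so uniform areas merge into
    larger superpixels while fine structures get denser coverage."""
    scale = saliency.mean() / (saliency + 1e-12)
    return base_radius * np.clip(scale, 0.5, 2.0)
```

Pixels with below-average saliency thus get a search radius above the base value, and structure-rich pixels get one below it.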

We now turn to explain our second major modification, which concerns the seeding initialisation.

Figure 2: Illustration of our initialisation strategy, which incorporates the object density measure, for two initial seeds. From left to right: first seed and second seed.

3.2 Seeding Initialisation: A New Strategy

In this section, we describe our new seeding initialisation. Our main motivation is that we can use the object density measure defined above to help seed clusters in object-rich parts of the image, which we expect to contain more distinct regions. In this way, we obtain greater resolution at fewer iterations, and improve the focus on relevant and interesting regions.

We remind the reader that SLIC initialisation is based on sampling pixels on a regular image grid. For comparison purposes, we start by recalling the SLIC initialisation, which reads as follows.

#pseudocode for seeding initialisation SLIC
Initialise cluster centers
by sampling at a regular grid with
step S = sqrt(N/K)
#where N is the size of the image
#and K the number of superpixels
Move cluster centers to the lowest gradient
position in a 3x3 neighborhood

Our proposed approach, which incorporates our structure measure into this seeding, can be described informally as follows. We first choose as an initial point the pixel with the lowest value of the measure, and then increase the values near this point so that its neighbours are unlikely to be selected as another initial point. In this way, we guarantee that the distance between seed points is comparable to the search range, which helps reduce redundant searches. This process is illustrated in Fig. 2 for two initial seeds.
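This greedy seeding can be sketched as follows. The penalty details here are hypothetical: we hard-block the immediate neighbours and add a distance-weighted soft penalty within a given radius, which mirrors the two-stage hedging described below.

```python
import numpy as np

def seed_clusters(cost_map, num_seeds, radius):
    """Greedy seeding: repeatedly pick the lowest-cost pixel, then
    penalise its surroundings so the next seed lands elsewhere."""
    cost = cost_map.astype(float).copy()
    H, W = cost.shape
    yy, xx = np.mgrid[0:H, 0:W]
    seeds = []
    for _ in range(num_seeds):
        y, x = np.unravel_index(np.argmin(cost), cost.shape)
        seeds.append((y, x))
        d2 = (yy - y) ** 2 + (xx - x) ** 2
        cost[d2 <= 2] = np.inf                 # adjacent: unselectable
        near = (d2 > 2) & (d2 <= radius ** 2)  # proximity: soft penalty
        cost[near] += (cost_map.max() + 1.0) * (1 - np.sqrt(d2[near]) / radius)
    return seeds
```

Using the saliency-derived measure as `cost_map` places early seeds in the most structure-rich locations while keeping seeds spread apart.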

Figure 3: Visual comparison of the seeding initialisation of SLIC vs ours. (a) One can see that we seek to focus on relevant areas (i.e. other than the background). (b) The effect of the suppression parameter in our seeding strategy.

The hedging described above is carried out in two stages as follows.

  • Points adjacent to the initial point are made unselectable, by setting the value of the measure at these points excessively large.

  • Points in the proximity of the initial point are made less likely, but not impossible, to select, again by altering the measure. The influence range, and the extent of this penalty, are tunable.

The advantage of these changes is that the seed density in any area is limited in two ways compared with the original method. The overall procedure of our method, which suitably sets the initialisation points according to our structure measure, is described formally as follows.

#pseudocode for seeding initialisation OURS
Set t = 0
While not enough seeds:
    Set range = sqrt(NumOfPixels/NumOfSuperpixels)
    for each superpixel centre:
        Initialise the centre at the pixel with the lowest value
        of the structure measure
        Set the adjacent neighbours of the centre
        unselectable
        for all points within range of the centre:
            increase the structure measure
            Smooth the penalised region
Figure 4: From left to right: quantitative comparison of our approach vs SLIC using three metrics (UE, BR and BP); our approach reports the best scores metric-wise. Four visual output comparisons of SLIC vs our approach. On closer inspection, one can see that our approach achieves better connection of structures and keeps fine details; for example, see the faces in (A), (C) and (D), and the hand in (B).

An output example is displayed in Fig. 3. Subfigure (a) shows an initialisation comparison between our approach and SLIC; we see that our approach gives more importance to the ostrich than to the background. In subfigure (b), we evaluate possible choices for the suppression parameter and display the corresponding outputs. In practice, we found that a single choice works for a range of images.

4 Experimental Results

In this section, we describe in detail the experiments that we ran to evaluate our approach.

4.1 Evaluation Protocol

Dataset Description. We evaluate our proposed approach on a publicly available dataset, the Berkeley Segmentation Dataset [3], which provides ground-truth segmentations of the images for quantitative analysis.

Comparison Methodology. We compare our approach against state-of-the-art (SOTA) superpixel methods. For this, we design a two-part evaluation scheme. In the first part, we compare our approach against SLIC [1]; this comparison demonstrates that our carefully designed solution achieves better performance than the top reference among clustering-based methods. In the second part, we compare against further state-of-the-art techniques: QS [30], TP [11], TPS [28], LRW [24], SNIC [2] and dSLIC [17]. We compare our approach qualitatively through visual comparisons, and quantitatively by computing three metrics: Under-segmentation Error (UE), Boundary Recall (BR) and Boundary Precision (BP). See Supplementary Material, Section 3, for the explicit definitions of these metrics.

Parameter Selection. For all compared approaches, QS [30], TP [11], TPS [28], SLIC [1], LRW [24], SNIC [2] and dSLIC [17], we set the parameters as suggested in the corresponding work, and used the code released by each corresponding author. For our approach, we set the compactness parameter to a value offering a good trade-off between shape uniformity and boundary adherence (see Supplementary Material, Section 2, for further discussion). We performed the evaluation over a range of superpixel counts up to 600.

The experiments reported in this section were run under the same conditions in a CPU-based implementation, using an Intel Core i7 with 16GB RAM.

4.2 Results and Discussion

We divide this section into two parts, following the comparison methodology presented in the previous section.

Figure 5: Superpixel output comparisons of our approach vs different methods from the body of literature: QS [30], SLIC [1], TP [11], TPS [28], LRW [24], SNIC [2] and dSLIC [17]. On closer inspection, one can see that our approach offers better superpixel segmentation. For example, the eyes in (A), (B) and (C); the ostrich's boundary in (D); and the eyes and basket in (E).
Figure 6: Superpixel segmentation outputs of our approach vs QS [30], SLIC [1], TP [11], TPS [28], LRW [24], SNIC [2] and dSLIC [17]. Visual assessment shows that the proposed algorithm performs better than the compared approaches. Examples: (F) the leaves; (G) and (I) the face; (H) the house boundaries; and (J) the hand.

Is our Approach better than SLIC? As the SLIC approach remains a top reference, and is the basis of our approach, we start by evaluating against it. Results are displayed in Fig. 4. A closer look at the right side of this figure shows that our approach yields a better segmentation of the structures, keeping fine details of the objects, while avoiding unnecessary over-segmentation of uniform areas. These positive properties can be seen, for example, in (B) the proper recovery of the hand; in (C) the hair, eyebrows and line patterns in the jumper that are correctly clustered; in (D), where our approach successfully captures the eyes and moustache; and in (A), with better preservation of the face structure, including the nose and lips.

To further support our visual results, we ran a quantitative analysis based on the three metrics UE, BR and BP. The results are displayed on the left side of Fig. 4. The top part shows a comparison in terms of UE, where the results reflect conformity to the true boundaries; we observe that our approach achieves the lowest UE for all superpixel counts. The same positive effect is found in terms of precision versus recall, where our approach displays the best performance. This improvement translates into our approach being the best at producing superpixels that respect object boundaries.

Figure 7: Metric-wise comparison of our approach vs SOTA techniques using UE, BR and BP. On closer inspection, we can see that our approach overall offers the lowest UE and the highest BR. Finally, the good adherence to the true edges is reflected in the last plot, in which our approach overall achieves the best trade-off between these metrics.
Figure 8: Averaged CPU time comparison of our approach vs the body of literature. One can see that the improvement of our approach comes at a negligible cost in runtime in comparison with the fastest approaches, SLIC and SNIC.

Is our approach better than Other Superpixel approaches? As the second part of our evaluation, we compare our approach against SOTA models: QS [30], TP [11], TPS [28], LRW [24], SNIC [2] and dSLIC [17]. We selected for our comparison approaches coming from different perspectives: graph-based, path-based, density-based and clustering-based approaches. Results are displayed in Figs.  5, 6 and 7.

We first present a visual comparison of a selection of images from the Berkeley dataset in Figs. 5 and 6. By visual inspection, one can see that QS and TPS perform the most poorly among the compared approaches: they fail to provide good boundaries for the structures and do not preserve relevant details. Examples can be seen in the eyes in (A), (B), (E) and (G), and the preservation of fine details in (C), (E) and (F). LRW offers better edge adherence than QS and TPS but also fails to preserve relevant objects, for example the eyes and moustache in (G).

In contrast, SLIC and dSLIC perform better than QS, TPS and LRW. One can observe that SLIC and dSLIC readily compete in terms of boundary adherence to the structures and in correctly grouping the majority of the objects. However, SLIC still produces outputs with more superpixels than necessary in homogeneous parts of the structures; see, for instance, the fish eye in (A), the nose in (B) and the hand in (J). Although dSLIC performs slightly better than SLIC, it still fails to capture fine details.

Among these approaches, SNIC displays more robustness in grouping structures correctly. However, like SLIC, it also tends to generate more superpixels than needed in uniform regions, so that the final outputs do not capture fine details. Examples are the face details in (G), (E) and (C); the eyes and head in (I); and the leaves in (F).

These major drawbacks are mitigated by our model. Our algorithmic approach shows the best boundary adherence and regularity. This is visible in the leaves in image (F), where our approach better captures the structure; in (I), on the lips, where our approach captures the correct geometry; and in (A), the fish eyes, where our approach is the only one that correctly segments the inner part. These positive properties are prevalent in all images. Further examples include preservation of the geometry, such as the hand in (J) and the face in (G), where our model is the only one able to correctly segment these fine details.

To further support our visual evaluation, we show a metric-wise comparison in Fig. 7. We start by evaluating the approaches in terms of UE, displayed on the left side of the figure. Close observation shows that QS, TPS and TP perform poorly, and in particular LRW, which reports the highest under-segmentation error. dSLIC and SLIC show quite low UE, and SNIC ranks second best. Overall, our approach shows a substantial improvement over the compared approaches, reporting the lowest UE for all superpixel counts. A similar effect is exhibited in terms of Boundary Recall: TP and TPS perform poorly while the other compared approaches report better BR; our approach readily competes with the other schemes and its overall BR is the best. The same effect is observed in terms of BP vs BR, which reflects that our approach overall adheres better to the true boundaries.

How Is the Computational Performance? Finally, we evaluate our approach vs the SOTA models in terms of CPU time in seconds. Results are displayed in Fig. 8, using the time averaged across all images and over the range of [80, 2500] superpixels. From this plot, we observe that TP, TPS and LRW require high computational time, whilst QS and dSLIC slightly improve in this regard. Finally, SLIC, SNIC and OURS provide more feasible runtimes that are appropriate for a pre-processing task. We remark that our slightly higher computational load is justified by the substantially improved results over SLIC and SNIC with the same number of superpixels.

5 Conclusion

In this work, we proposed a new superpixel approach that builds upon the SLIC technique. Our approach incorporates the notion of the spectral residual as a proxy for object density, together with a novel seeding strategy. We demonstrated that our approach seeds clusters advantageously and modifies the local search radius. This leads to better segmentation, at a comparable computational load, than other state-of-the-art algorithms.

References

  • [1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 34 (11), pp. 2274–2282. Cited by: §1, §1, §1, §2, Figure 5, Figure 6, §4.1, §4.1.
  • [2] R. Achanta and S. Susstrunk (2017) Superpixels and polygons using simple non-iterative clustering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4651–4660. Cited by: §2, Figure 5, Figure 6, §4.1, §4.1, §4.2.
  • [3] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik (2010) Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 898–916. Cited by: §4.1.
  • [4] J. Chen, J. Hou, Y. Ni, and L. Chau (2018) Accurate light field depth estimation with superpixel regularization over partially occluded regions. IEEE Transactions on Image Processing (TIP), pp. 4889–4900. Cited by: §1.
  • [5] D. Comaniciu and P. Meer (2002) Mean shift: a robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis & Machine Intelligence (5), pp. 603–619. Cited by: §2.
  • [6] P. F. Felzenszwalb and D. P. Huttenlocher (2004) Efficient graph-based image segmentation. International Journal of Computer Vision. Cited by: §2.
  • [7] H. Fu, X. Cao, D. Tang, Y. Han, and D. Xu (2014) Regularity preserved superpixels and supervoxels. IEEE Transactions on Multimedia 16 (4), pp. 1165–1175. Cited by: §2.
  • [8] R. Giraud, V. Ta, and N. Papadakis (2017) Superpixel-based color transfer. In IEEE International Conference on Image Processing (ICIP), pp. 700–704. Cited by: §1.
  • [9] X. Hou and L. Zhang (2007) Saliency detection: a spectral residual approach. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. Cited by: 1st item, §2, §3.1, §3.1, §3.1.
  • [10] A. Humayun, F. Li, and J. M. Rehg (2015) The middle child problem: revisiting parametric min-cut and seeds for object proposals. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1600–1608. Cited by: §1.
  • [11] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi (2009) Turbopixels: fast superpixels using geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). Cited by: §1, §2, Figure 5, Figure 6, §4.1, §4.1, §4.2.
  • [12] G. Li and Y. Yu (2016) Deep contrast learning for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 478–487. Cited by: §2.
  • [13] Z. Li and J. Chen (2015) Superpixel segmentation using linear spectral clustering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1356–1363. Cited by: §2.
  • [14] F. Liu, C. Shen, and G. Lin (2015) Deep convolutional neural fields for depth estimation from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5162–5170. Cited by: §1.
  • [15] P. Liu, M. Lyu, I. King, and J. Xu (2019) SelFlow: self-supervised learning of optical flow. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4571–4580. Cited by: §1.
  • [16] S. Lloyd (1982) Least squares quantization in pcm. IEEE Transactions on Information Theory, pp. 129–137. Cited by: §1, §2.
  • [17] G. Maierhofer, D. Heydecker, A. I. Aviles-Rivero, S. M. Alsaleh, and C. Schonlieb (2018) Peekaboo-where are the objects? structure adjusting superpixels. In IEEE International Conference on Image Processing (ICIP), pp. 3693–3697. Cited by: §1, §2, §3.1, Figure 5, Figure 6, §4.1, §4.1, §4.2.
  • [18] M. Menze and A. Geiger (2015) Object scene flow for autonomous vehicles. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3061–3070. Cited by: §1.
  • [19] A. P. Moore, S. J. Prince, J. Warrell, U. Mohammed, and G. Jones (2008) Superpixel lattices. In 2008 IEEE conference on computer vision and pattern recognition, pp. 1–8. Cited by: §2.
  • [20] P. Neubert and P. Protzel (2014) Compact watershed and preemptive slic: on improving trade-offs of superpixel segmentation algorithms. In International Conference on Pattern Recognition, pp. 996–1001. Cited by: §2.
  • [21] J. Papon, A. Abramov, M. Schoeler, and F. Worgotter (2013) Voxel cloud connectivity segmentation-supervoxels for point clouds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2027–2034. Cited by: §2.
  • [22] X. Ren and J. Malik (2003) Learning a classification model for segmentation. In IEEE International Conference on Computer Vision (ICCV), pp. 10. Cited by: §1, §1, §2.
  • [23] P. Sellars, A. Aviles-Rivero, and C. Schönlieb (2019) Superpixel contracted graph-based learning for hyperspectral image classification. arXiv preprint arXiv:1903.06548. Cited by: §1.
  • [24] J. Shen, Y. Du, W. Wang, and X. Li (2014) Lazy random walks for superpixel segmentation. IEEE Transactions on Image Processing, pp. 1451–1462. Cited by: §2, Figure 5, Figure 6, §4.1, §4.1, §4.2.
  • [25] C. Shi and C. Pun (2019) Multiscale superpixel-based hyperspectral image classification using recurrent neural networks with stacked autoencoders. IEEE Transactions on Multimedia (TMM). Cited by: §1.
  • [26] J. Shi and J. Malik (2000) Normalized cuts and image segmentation. Departmental Papers (CIS), pp. 107. Cited by: §2.
  • [27] D. Stutz, A. Hermans, and B. Leibe (2018) Superpixels: an evaluation of the state-of-the-art. Computer Vision and Image Understanding. Cited by: §1, §2.
  • [28] D. Tang, H. Fu, and X. Cao (2012) Topology preserved regular superpixel. In IEEE International Conference on Multimedia and Expo, pp. 765–768. Cited by: §1, §2, Figure 5, Figure 6, §4.1, §4.1, §4.2.
  • [29] N. Tong, H. Lu, X. Ruan, and M. Yang (2015) Salient object detection via bootstrap learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1884–1892. Cited by: §2.
  • [30] A. Vedaldi and S. Soatto (2008) Quick shift and kernel methods for mode seeking. In European conference on computer vision, pp. 705–718. Cited by: §1, §2, Figure 5, Figure 6, §4.1, §4.1, §4.2.
  • [31] L. Vincent and P. Soille (1991) Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Transactions on Pattern Analysis & Machine Intelligence (6), pp. 583–598. Cited by: §2.
  • [32] J. Wang and X. Wang (2012) VCells: simple and efficient superpixels using edge-weighted centroidal voronoi tessellations. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). Cited by: §2.
  • [33] S. Wang, H. Lu, F. Yang, and M. Yang (2011) Superpixel tracking. In International Conference on Computer Vision (ICCV), pp. 1323–1330. Cited by: §1.
  • [34] C. Yang, L. Zhang, H. Lu, X. Ruan, and M. Yang (2013) Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3166–3173. Cited by: §2, §2.
  • [35] D. Yeo, J. Son, B. Han, and J. Hee Han (2017) Superpixel-based tracking-by-segmentation using markov chains. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1812–1821. Cited by: §1.

I. Introduction

This supplemental material extends the results and discussions from the main paper. It is organised as follows.

  • Section II: We further extend the description of the SLIC approach.

  • Section III: In this section, we explicitly define the metrics used for the quantitative analyses performed in the main paper.

II. Simple Linear Iterative Clustering (SLIC): Background

In this section, we extend and further formalise the SLIC approach, as our approach builds upon it. SLIC uses the Lab colour space, a representation of the visible colours that simulates human vision.

Definition 1 (Lab color space)

The Lab color space describes mathematically all perceivable colors in three dimensions: L for lightness, and a and b for the color opponents green–red and blue–yellow. The coordinate L ranges from 0 to 100, while a and b lie in bounded intervals, the bounds of which depend on the convention used.

Given this particular choice of coordinates for our colour space, SLIC chooses the following distance measure. For pixels i, j with colour values (l, a, b) and positions (x, y), define

d_lab = sqrt( (l_i − l_j)² + (a_i − a_j)² + (b_i − b_j)² ),
d_xy = sqrt( (x_i − x_j)² + (y_i − y_j)² ),
D = sqrt( d_lab² + (m/S)² d_xy² ),

where S is the grid interval and m is a parameter which tunes the importance of spatial as compared to Lab-distance. At the practical level, the value of m strongly impacts the shape of the superpixels found.
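This distance measure translates directly into code (assuming each pixel is given as an (l, a, b, x, y) tuple):

```python
import numpy as np

def slic_distance(p, q, m=10.0, S=16.0):
    """SLIC distance between pixels p and q, each (l, a, b, x, y).

    m (compactness) weights spatial against Lab colour distance;
    S is the sampling grid interval between seeds."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d_lab = np.sqrt(np.sum((p[:3] - q[:3]) ** 2))
    d_xy = np.sqrt(np.sum((p[3:] - q[3:]) ** 2))
    return np.sqrt(d_lab ** 2 + (m / S) ** 2 * d_xy ** 2)
```

With m = S, a spatial offset of one grid interval contributes as much as one unit of Lab distance; larger m yields more compact, grid-like superpixels.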

III. Metrics Definition

For clarification purposes, in this section, we give a explicit definition of the metrics used to evaluate our proposed superpixel approach.

One of the most important properties of any superpixel algorithm is adherence to the true boundaries of the image. We explain the two metrics used in our evaluation, known as boundary recall and under-segmentation error. When testing superpixel performance, we assume that we are given an image along with a ground truth, representing the true regions of the image.

Boundary Recall measures the proportion of the boundary of the true regions in the ground truth which is close to a boundary in the segmentation. To quantify the notion of being close to a boundary, we recall the following definition.

Given a subset E of the edge set, we define the distance d(p, E) = min_{q ∈ E} ‖p − q‖, where ‖·‖ denotes the norm of the difference, measured in pixels. We then define

Definition 2 (Boundary Recall)

Given a ground truth G and a segmentation S, we write ∂G for the union of the edge boundaries of the regions of G, and similarly write ∂S for S. For a tolerance ε, we define the boundary recall by

BR(S; G) = #{ p ∈ ∂G : d(p, ∂S) ≤ ε } / #∂G.

In words, the boundary recall is the proportion of true edges which are close to a superpixel edge.
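Boundary recall can be computed as follows. This is a sketch: boundaries are taken as pixels whose right or lower neighbour carries a different label, and "close" is measured by dilating the segmentation boundary ε times with a 4-neighbourhood, i.e. Manhattan distance.

```python
import numpy as np

def boundary_mask(labels):
    """Pixels whose right or lower neighbour carries a different label."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(gt_labels, seg_labels, eps=2):
    """Fraction of ground-truth boundary pixels within eps steps
    (Manhattan distance) of a segmentation boundary pixel."""
    gt_b = boundary_mask(gt_labels)
    seg_b = boundary_mask(seg_labels)
    # dilate the segmentation boundary eps times (4-neighbourhood)
    dil = seg_b.copy()
    for _ in range(eps):
        grown = dil.copy()
        grown[1:, :] |= dil[:-1, :]
        grown[:-1, :] |= dil[1:, :]
        grown[:, 1:] |= dil[:, :-1]
        grown[:, :-1] |= dil[:, 1:]
        dil = grown
    if gt_b.sum() == 0:
        return 1.0
    return float((gt_b & dil).sum() / gt_b.sum())
```

A perfect segmentation scores 1.0; a segmentation with no boundaries at all scores 0.0 whenever the ground truth has any boundary.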

Under-segmentation Error. Intuitively, this measures the size of all superpixels which spill across boundaries of the ground truth.

Definition 3 (Under-segmentation Error)

For a ground truth G = {G_1, …, G_n}, we fix thresholds B_1, …, B_n. Given a segmentation S = {S_1, …, S_K} of an image of N pixels, the under-segmentation error is given by

UE(S; G) = (1/N) [ Σ_i Σ_{j : |S_j ∩ G_i| > B_i} |S_j| − N ].

Observe that, since S is a partition of the image, Σ_j |S_j| = N; in particular, UE vanishes exactly when every superpixel overlaps only one ground-truth region above the threshold. Hence, the under-segmentation error measures how wasteful the coverings of the true regions by the superpixel regions are. We usually take B_i to be a fixed proportion of |G_i|.
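This definition translates directly into code (a sketch; the threshold fraction used here is an illustrative assumption, the definition only requiring it to be a fixed proportion of the ground-truth region's size):

```python
import numpy as np

def undersegmentation_error(gt_labels, seg_labels, frac=0.05):
    """Under-segmentation error: total area of superpixels overlapping
    each ground-truth region by more than a threshold (a fixed fraction
    of the region's area), minus the image size, normalised by it."""
    N = gt_labels.size
    total = 0
    for g in np.unique(gt_labels):
        g_mask = gt_labels == g
        thresh = frac * g_mask.sum()   # B_i as a proportion of |G_i|
        for s in np.unique(seg_labels):
            s_mask = seg_labels == s
            if np.logical_and(g_mask, s_mask).sum() > thresh:
                total += s_mask.sum()
    return (total - N) / N
```

A segmentation that reproduces the ground truth exactly scores 0, since each superpixel is counted for exactly one region; every spilling superpixel adds its full area again.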