Generalized Shortest Path-based Superpixels for Accurate Segmentation of Spherical Images

04/15/2020 ∙ by Remi Giraud, et al.

Most existing superpixel methods are designed to segment standard planar images as a pre-processing step for computer vision pipelines. Nevertheless, the increasing number of applications based on wide-angle capture devices, mainly generating 360° spherical images, has reinforced the need for dedicated superpixel approaches. In this paper, we introduce a new superpixel method for spherical images called SphSPS (for Spherical Shortest Path-based Superpixels). Our approach respects the spherical geometry and generalizes the notion of shortest path between a pixel and a superpixel center to the 3D spherical acquisition space. We show that the feature information along such a path can be efficiently integrated into our clustering framework and jointly improves the respect of object contours and the shape regularity. To relevantly evaluate this last aspect in the spherical space, we also generalize a planar global regularity metric. Finally, the proposed SphSPS method obtains significantly better performance than recent planar and spherical superpixel approaches on the reference 360° spherical panorama segmentation dataset.







I Introduction

The growth in resolution and quantity of image data has highlighted the need for efficient lower-level representations to reduce the computational load of computer vision pipelines. In this context, superpixels were popularized with [1] to reduce the image domain to irregular regions having approximately the same size and homogeneous colors. Contrary to regular multi-resolution schemes, a result at the superpixel scale can be very close to the optimal one at the pixel scale. Superpixels have been successfully used in many applications such as semantic segmentation [23, 26], optical flow estimation [18] or style transfer [15]. The main issue to deal with is the irregularity between all regions, which may prevent the use of standard neighborhood-based tools. Nevertheless, this issue has been addressed in graph-based approaches [11], using neighborhood structures [8], or within deep learning frameworks [24].
At the same time, the use of new acquisition devices capturing wide angles, such as fisheyes, generally covering a 360° field of view, has become more and more popular. These devices offer a global capture of the environment, particularly interesting for applications such as autonomous driving. With a depth-aware system, the intensity can be projected on a 3D point cloud. Otherwise, the image sphere is generally projected on a discrete 2D plane to generate an equirectangular image, inducing distortions [34]. In this context, several works, e.g., [5, 21], have used standard planar superpixels although they do not consider the geometric distortions in the equirectangular image, which may limit the segmentation accuracy and their interpretation on the spherical acquisition space [33].

Many superpixel approaches have been proposed over the years, almost exclusively to segment standard planar images. These methods use watershed [17], region growing [14], eikonal-based [4], graph-based energy [16], or even coarse-to-fine algorithms [31]. A significant breakthrough was obtained with the SLIC method [1], locally adapting a k-means algorithm on a trade-off between distances in the spatial and CIELab color spaces to generate superpixels. The method has few parameters and a low processing time, but has a limited ability to adapt to different image content and to accurately capture object borders. Many improvements of SLIC have been proposed using boundary constraints [32], an advanced feature space [6], non-iterative clustering [2], a shortest path approach [10], or even deep learning processes [24], although these last methods present the usual limitations in terms of resources, training time, need for large datasets, and applicability to other images.

For spherical images, the unsupervised segmentation approach of [7] has been extended in [30], but generates very irregular regions, not considered as superpixels. More recently, the SLIC method was extended to produce spherically regular superpixels [33]. Pixels are projected on the unit sphere to compute the spatial constraints and produce regular superpixels in the spherical space. Besides the display interest, the respect of the acquisition space geometry enables more accurate segmentation of the image objects [33]. Nevertheless, this approach comes with the same limitations as SLIC, i.e., limited adaptability to different contexts, with severe non-robustness to textures or noise due to the use of a standard color feature space, and no explicit integration of contour information. These limitations are addressed in [10], whose authors obtain significantly higher accuracy for standard planar image segmentation by considering the color and contour features along the shortest path between the pixel and the superpixel.


In this paper, we address the limitations of the spherical approach of [33] by proposing in Section II a new superpixel method called SphSPS (Spherical Shortest Path-based Superpixels). SphSPS is based on the same spherical k-means approach as [33] but exploits more advanced features [6] and generalizes the notion of shortest path [10] to the acquisition space, here the spherical one. To this end, a dedicated fast shortest path algorithm is defined to integrate the information of this large number of pixels into the method.

SphSPS generates accurate and regular spherical superpixels in a very limited processing time (see Figure 1). To relevantly evaluate the regularity aspect in the spherical space, we also propose a generalization of the global regularity measure [9] (Section III). SphSPS obtains higher segmentation performance than state-of-the-art methods on the reference 360° spherical panorama segmentation dataset [25] (Section IV).

Standard planar superpixels using [6]
Spherical superpixels using the proposed SphSPS method
Fig. 1: Example of superpixel segmentation with a planar method [6] and the proposed spherical SphSPS method on a 360° panorama image. SphSPS provides accurate superpixels that are regular in the spherical acquisition space (red square) and connected at horizontal boundaries (blue ellipse).

II Spherical Shortest Path-based Superpixels

To introduce SphSPS, we first present the k-means method [1] (Section II-A) and its spherical adaptation [33] (Section II-B). Then, we present the feature extraction method on a planar shortest path [10] (Section II-C1) and our generalization to the spherical space (Sections II-C2 and II-C3).

II-A Planar K-means Iterative Clustering

SphSPS is based on the SLIC algorithm [1], using an iteratively constrained k-means clustering of pixels. Superpixels S_k are first initialized over the image as square blocks of size s×s, described by the average CIELab color c_k and barycenter position X_k of the pixels in S_k. The clustering for each pixel relies on a color distance d_c and a spatial distance d_s. At each iteration, each superpixel S_k is compared to all pixels p, of color c_p at position X_p, within a 2s×2s area around its barycenter X_k. A pixel is associated to the superpixel minimizing the distance D defined as:

D(p, S_k) = d_c(p, S_k) + d_s(p, S_k) · (m/s)²,    (1)

with m the trade-off parameter setting the shape regularity. Finally, a post-processing step ensures the region connectivity.
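For concreteness, the constrained assignment step above can be sketched as follows (a minimal NumPy sketch; the function name, the window extent and the exact distance combination d_c + d_s·(m/s)² are our assumptions, following the standard SLIC formulation):

```python
import numpy as np

def slic_assign(image_lab, centers, m=10.0, s=20):
    """One constrained k-means assignment step in the spirit of SLIC [1].

    image_lab : (H, W, 3) CIELab image.
    centers   : list of (mean_lab, (y, x)) superpixel colors and barycenters.
    m, s      : regularity trade-off and initial superpixel step.
    """
    H, W, _ = image_lab.shape
    labels = -np.ones((H, W), dtype=int)
    best = np.full((H, W), np.inf)
    for k, (c_lab, (cy, cx)) in enumerate(centers):
        # restrict comparisons to a 2s x 2s window around the barycenter
        y0, y1 = max(0, cy - s), min(H, cy + s)
        x0, x1 = max(0, cx - s), min(W, cx + s)
        ys, xs = np.mgrid[y0:y1, x0:x1]
        d_c = np.linalg.norm(image_lab[y0:y1, x0:x1] - c_lab, axis=2)
        d_s = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
        D = d_c + (m / s) ** 2 * d_s          # assumed SLIC-style trade-off
        mask = D < best[y0:y1, x0:x1]
        best[y0:y1, x0:x1][mask] = D[mask]
        labels[y0:y1, x0:x1][mask] = k
    return labels
```

A full iteration would then recompute each superpixel's mean color and barycenter from the new labels before the next assignment pass.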

II-B Spherical Geometry

In the spherical acquisition space, vertical and horizontal coordinates are respectively projected to the meridians and circles of latitude, so the spherical image has a width twice its height. SphSPS is based on the same adaptation of the planar k-means method to the spherical geometry as [33], which requires three steps. The first one is the initialization of the superpixels. To evenly spread the barycenters along the sphere, we also use Hammersley sampling [27]. The second step is the search area, which must consider the proximity of pixels in the spherical space. For instance, superpixels at the image top and bottom have larger search areas. This area is defined for each superpixel of barycenter X_k as:


with θ_i the polar angle corresponding to the i-th row of an image of height h and width w, and s the average superpixel size. The 360° geometry must also be handled to horizontally connect the pixels. This is done with a mirror effect when the search region falls outside the image boundaries [33]. The third aspect is the computation of the spatial distance, which must also be done in the spherical space. For each image pixel p at row i and column j, the projection P(p) on the 3D acquisition space is computed as:

P(p) = [sin θ_i cos φ_j, sin θ_i sin φ_j, cos θ_i],    (3)

with θ_i = πi/h and φ_j = 2πj/w. Note that the column index j may fall outside the image domain when computed from the search area (2), so we map it back with j ← j + w if j < 0.

The most straightforward 3D spatial distance is the Euclidean one. SphSPS instead uses the spherical and computationally inexpensive cosine dissimilarity distance proposed in [33], i.e., d_s(p, S_k) = 1 − ⟨P(p), P(X_k)⟩. Note that with an adjusted parameter m (1), both distances can achieve almost similar performance [33] (see Section IV).
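The pixel-to-sphere projection and the cosine dissimilarity can be sketched as follows (a NumPy sketch under an assumed equirectangular convention; the half-pixel offsets and function names are ours):

```python
import numpy as np

def pixel_to_sphere(i, j, h, w):
    """Project equirectangular pixel (i, j) onto the unit sphere.

    Assumed convention: polar angle theta in [0, pi] from the row index,
    azimuth phi in [0, 2*pi) from the column index.
    """
    theta = np.pi * (i + 0.5) / h
    phi = 2.0 * np.pi * (j + 0.5) / w
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def cosine_dissimilarity(p, q):
    """Spherical spatial distance as in [33]: a cheap monotone surrogate
    of the geodesic arc length between two unit vectors."""
    return 1.0 - np.dot(p, q)
```

The dot product is monotonically related to the arc length on the unit sphere, which is why it can replace the Euclidean distance at no accuracy cost once the regularity parameter is adjusted.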

II-C Generalized Shortest Path Method

II-C1 Feature extraction on a shortest path

In [10], the color and contour information of pixels on the planar shortest path between a pixel and a superpixel barycenter are used to improve segmentation accuracy and regularity. SphSPS also integrates these features and uses the same clustering distance as [10]. Nevertheless, in the following, the shortest path differs since we compute it in the spherical space.

First, to relevantly increase the regularity and prevent non-convex shapes from appearing, the average color distance of the pixels on the path P is added to the color distance d_c such that:

d̃_c(p, S_k) = (1 − λ) d_c(p, S_k) + (λ/|P|) Σ_{q∈P} d_c(q, S_k),    (4)

with λ a trade-off parameter.

The contour information can also be considered to increase the respect of object borders, using a contour map C with values between 0 and 1. A contour term d_C is defined as:


with σ the parameter penalizing the crossing of a contour.

The final clustering distance of SphSPS is defined as:


with d_s the spherical spatial distance using the cosine dissimilarity as in [33], and P the proposed spherical shortest path, computed as follows.

II-C2 Generalized shortest path

In Figure 2, we compare shortest paths in the planar space, as in [10], and in the spherical one, as in SphSPS. With planar images, since no distortions are introduced between the acquisition and the image space, they are considered equivalent. Hence, the shortest path reduces to a linear path and can be easily computed with a discrete algorithm [3]. Nevertheless, in general, the shortest path should be computed in the acquisition space, which can be spherical or even circular when using fisheyes with different capture angles. Hence, the generalized formulation of the shortest path problem computes the path in the acquisition space and projects it back to the planar image space (7).

Fig. 2: Examples of planar (dotted lines) and spherical (solid lines) shortest paths between points in the 2D image space (left) and the 3D acquisition space (right). The spherical path follows the shortest geodesic path on the sphere.

II-C3 Shortest path in the spherical space

The spherical shortest path consists in following the geodesic along the sphere [12], lying on a great circle (in orange in Figures 2 and 3) containing the two points and the sphere center. Tangential methods to extract way-points on the great circle have been formalized, for instance in [13]. Nevertheless, such theoretical approaches use many trigonometric computations that impact the performance. In the following, we propose a simpler reformulation of the spherical geodesic path problem.

Fast geodesic path implementation

For each comparison of a pixel at position P(p) to a superpixel of barycenter P(X_k), we propose to first compute an orthogonal coordinate system within their great circle. To build such a system, we perform an orthogonalization process to get the position P⊥, an orthogonal vector to P(p) within the great circle, such that:

P⊥ = (P(X_k) − ⟨P(p), P(X_k)⟩ P(p)) / ‖P(X_k) − ⟨P(p), P(X_k)⟩ P(p)‖,    (8)

with the scalar product ⟨P(p), P(X_k)⟩ already computed for the spatial distance d_s. Then, the angle θ between the two points is simply obtained with θ = arccos(⟨P(p), P(X_k)⟩). Finally, the geodesic path is defined within the basis (P(p), P⊥), by starting from the pixel position and linearly increasing the angle shift from 0 to θ to reach the superpixel barycenter, such that:

G_t = cos(θ_t) P(p) + sin(θ_t) P⊥,    (9)

with θ_t ∈ [0, θ] intermediate angles used to linearly sample points between the two positions. The geodesic path is finally projected in the planar space (3) to get the path pixels (7). In this way, we obtain the shortest spherical path coordinates with simple calculations, dividing the processing time by a factor of 2 compared to tangential approaches. An example of a spherical shortest path on a great circle, with the computation of the corresponding coordinate system, is illustrated in Figure 3.
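The geodesic sampling described above admits a compact implementation (a NumPy sketch; the function name and the number of sampled points are our assumptions):

```python
import numpy as np

def spherical_shortest_path(p, b, n=5):
    """Sample n 3D points on the great-circle arc from pixel position p to
    barycenter b (both unit vectors), via Gram-Schmidt orthogonalization."""
    dot = np.clip(np.dot(p, b), -1.0, 1.0)
    # orthogonal companion of p within the great circle
    v = b - dot * p
    norm = np.linalg.norm(v)
    if norm < 1e-12:                 # p and b (anti)parallel: no unique circle
        return [p.copy() for _ in range(n)]
    v /= norm
    theta = np.arccos(dot)           # angle between the two positions
    # linearly increase the angle shift from 0 to theta
    return [np.cos(a) * p + np.sin(a) * v
            for a in np.linspace(0.0, theta, n)]
```

Every sampled point stays on the unit sphere by construction, since (p, v) is an orthonormal basis of the great-circle plane.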

Fig. 3: Computation of the spherical shortest path. The orthogonal coordinate system is computed from the projection of P(X_k) on P(p) (8). The angle θ between the positions is then used to sample 3D points on the path (9).


First, for each superpixel, we can store the color distance computed to each tested pixel, reducing the processing time. Then, contrary to the planar linear path algorithm [3], we can exploit path redundancy. If the path from a pixel to a superpixel crosses a previously computed path to the same superpixel, the rest of the path must be the same since they lie on the same great circle. So we can also store the average color and contour information on the path for each crossed pixel. This is done efficiently using a recursive implementation. In this way, for many pixels, we are able to directly access the large quantity of information contained in the shortest path, again reducing the processing time.
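The path-redundancy caching can be sketched as follows (a hypothetical minimal version; the cache layout and names are ours, and the real implementation also stores contour information):

```python
def path_average(path, color, cache):
    """Average color along `path` (pixel -> barycenter), reusing suffix
    sums cached by earlier paths towards the same superpixel.

    path  : pixel indices from the tested pixel to the barycenter.
    color : mapping pixel index -> color value.
    cache : maps a pixel index -> (sum, count) of colors from that pixel
            to the end of its (shared) path.
    """
    prefix = []
    tail_sum, tail_cnt = 0.0, 0
    for px in path:
        if px in cache:              # crossed a known path: reuse its suffix
            tail_sum, tail_cnt = cache[px]
            break
        prefix.append(px)
    # accumulate the new prefix backwards and memoize each suffix
    s, c = tail_sum, tail_cnt
    for px in reversed(prefix):
        s, c = s + color[px], c + 1
        cache[px] = (s, c)
    return s / c
```

With one cache per superpixel and per iteration, later paths stop as soon as they touch an already-traversed pixel, which mirrors the early-exit behavior described above.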

III Generalized Global Regularity Measure

Superpixels tend to optimize a color and spatial trade-off, so metrics should mainly evaluate object segmentation and regularity performances. This last aspect has been sparsely evaluated, although most methods have a regularity parameter that may significantly impact superpixel-based pipelines. Moreover, the standard compactness metric [22], the only one extended to the spherical space [33], was proven very limited [9]. In this section, we propose a new way to relevantly evaluate the regularity in the acquisition space.

III-A Limitation of the Compactness Measure

In [33], the compactness measure C [22] is extended to the spherical case. The regularity of a segmentation S of an image I is only seen as a notion of circularity, computed as:

C(S) = Σ_k (|S_k|/|I|) Q(S_k),    (10)

with Q(S_k) the spherical isoperimetric quotient [20]. Hence, each superpixel is independently compared to a circular shape, such that, for instance, ellipses can have higher C measures than squares. In [9], this metric has been proven highly sensitive to boundary noise and inconsistent with the superpixel size. Moreover, in [33] it even fails to differentiate spherical and planar-based methods.

Fig. 4: Illustration of the projection process of the proposed Generalized Global Regularity (G-GR) metric (12). A superpixel shape is projected in the acquisition space, then onto a two-dimensional one using a PCA analysis, then downsampled to generate a 2D matrix, allowing for instance the computation of a convex hull to measure its regularity.

III-B Generalized Global Regularity Metric

III-B1 Global regularity metric

In [9], a global regularity metric (GR) is introduced to address the issues of the compactness. First, the Shape Regularity Criteria (SRC) is defined to robustly evaluate the convexity, the contour smoothness, and the 2D balanced repartition of each superpixel. Convexity and smoothness properties are computed with respect to the discrete convex hull containing the shape.

As for the compactness C (10), SRC is independently computed for each superpixel, so [9] also introduces a Smooth Matching Factor (SMF) to evaluate the consistency of superpixel shapes. Each superpixel is compared, after registration on its barycenter, to the average superpixel shape, created from the superposition of all registered superpixels.

Finally, the notion of regularity is defined by the GR (Global Regularity) metric, combining these two measures such that:

GR(S) = SRC(S) · SMF(S).    (11)
III-B2 Generalization in the acquisition space

Ideally, the regularity should be evaluated in the acquisition space. In our context, a superpixel S_k in the spherical acquisition space gives P(S_k), a set of 3D positions on the unit sphere (3). GR being based on the computation of a convex hull and barycenter registration, it cannot be directly applied to such point clouds in R³.

To generalize the metric, we propose to simply project the 3D points of P(S_k) on a discrete 2D plane, and then apply the initial GR. The whole process is illustrated in Figure 4. To do so, we first project a superpixel S_k from the discrete image space to its acquisition one, here to get a spherical point cloud P(S_k). Then, we apply a Principal Component Analysis (PCA) on P(S_k), and project the points on its two most significant eigenvectors to reduce it to a 2D point cloud. Finally, a downsampling is performed to obtain a discrete 2D shape S̃_k. In this way, each superpixel shape has a relevant discrete projection in the acquisition space. The proposed Generalized Global Regularity (G-GR) metric is defined as:

G-GR(S) = GR({S̃_k}),    (12)

with {S̃_k} the set of discrete 2D projections of the superpixels.
With the proposed G-GR metric, a gap is now visible, such that no planar method has higher regularity than spherical ones for a given number of superpixels (Section IV-C).
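The PCA projection step of the G-GR pipeline can be sketched as follows (a NumPy sketch; the final downsampling to a discrete grid is omitted and the function name is ours):

```python
import numpy as np

def project_superpixel_2d(points_3d):
    """Project a spherical superpixel point cloud onto its two principal
    axes, as in the G-GR projection step.

    points_3d : (N, 3) unit-sphere positions of the superpixel pixels.
    Returns the (N, 2) coordinates in the plane of maximal variance.
    """
    X = points_3d - points_3d.mean(axis=0)      # center the cloud
    # PCA via SVD: rows of Vt are principal directions, variance-sorted
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                         # keep the two strongest axes
```

Since superpixels are small patches of the sphere, they are nearly planar, so the two leading components retain almost all of the shape information before the 2D GR computation.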

IV Results

IV-A Validation Framework

IV-A1 Dataset

We consider the Panorama Segmentation Dataset (PSD) [25], containing 75 equirectangular 360° images, having between 115 and 1085 segmented objects with an average size of 1334 pixels. These images are taken from the standard spherical dataset SUN360 [28], and accurate ground-truth segmentations are provided by [25].

IV-A2 Metrics

To relevantly evaluate SphSPS performances and compare them to state-of-the-art methods, we use the superpixel metrics recommended in [9], for several superpixel numbers. The main aspects to evaluate are the object segmentation and spatial regularity performances, the latter being robustly evaluated in the acquisition space with the proposed G-GR metric (12).

For the segmentation aspect, the standard measure is the Achievable Segmentation Accuracy (ASA) [16], highly correlated to the Undersegmentation Error [19], as shown in [9]. The ASA measures the overlap of a superpixel segmentation S with the ground-truth objects G_i such that:

ASA(S) = (1/|I|) Σ_k max_i |S_k ∩ G_i|,    (13)

with |I| the number of image pixels.
The Boundary Recall (BR) is the metric commonly employed to evaluate the detection of the ground-truth contours B(G) by the boundaries of the superpixels B(S) such that:

BR(S) = (1/|B(G)|) Σ_{p ∈ B(G)} [ min_{q ∈ B(S)} ‖p − q‖ < ε ],    (14)

with ε a distance threshold in pixels [9], and [a] = 1 when a is true and 0 otherwise. To prevent methods generating superpixels with fuzzy borders from getting high performances [9], BR results are compared to the Contour Density (CD), i.e., the number of pixels of superpixel borders.
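As a reference, the ASA (13) and BR (14) measures can be sketched as follows (a minimal NumPy sketch; function names and the brute-force distance computation are our choices):

```python
import numpy as np

def asa(labels, gt):
    """Achievable Segmentation Accuracy: fraction of pixels covered when
    each superpixel is assigned to its most-overlapping ground-truth region.

    labels, gt : integer label maps of the same shape.
    """
    total = 0
    for k in np.unique(labels):
        overlap = np.bincount(gt[labels == k])
        total += overlap.max()
    return total / labels.size

def boundary_recall(gt_pts, sp_pts, eps=2.0):
    """BR: share of ground-truth contour pixels lying within eps pixels of
    a superpixel boundary. gt_pts, sp_pts: (N, 2) / (M, 2) coordinates."""
    # brute-force pairwise distances; fine for small contour sets
    d = np.linalg.norm(gt_pts[:, None, :] - sp_pts[None, :, :], axis=2)
    return (d.min(axis=1) <= eps).mean()
```

A production implementation would replace the pairwise distances by a distance transform of the superpixel boundary map, which is linear in the image size.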

The standard precision-recall curves can also be represented to illustrate the overall object contour detection performances. These are computed on a contour probability map generated by averaging the superpixel borders obtained at different scales. This map is thresholded at several intensities to get binary contour maps. For each threshold, the Precision (PR), the percentage of accurate detections among the superpixel borders, is computed along with the BR measure. For all PR curves, to synthesize the contour detection performance, we also report the maximum over all thresholds of the F-measure defined as:

F = 2 · PR · BR / (PR + BR).    (15)
IV-A3 Parameter settings

SphSPS was implemented in MATLAB using C-MEX code, on a standard Linux computer with 12 cores at 2.6 GHz and 64 GB of RAM. Contrary to [33], which uses the 3 average color features of the CIELab space, we use the 6-dimensional CIELab feature space of [6], also including the features of neighboring pixels [10]. A fixed number of pixels is considered on the shortest path (9). The number of iterations is set to 5, and the parameter λ (4), setting the trade-off between the central pixel and the ones on the shortest path, is set as in [10]. When used, the contour prior C is computed from [29] (5). Finally, the parameter m (6) is empirically set to provide a visually satisfying trade-off between the respect of object contours and spatial regularity.

Fig. 5: Impact of the SphSPS distance parameters. The contributions enable to significantly improve the accuracy and regularity performances.
(a) Initial image, (b) 3-Lab baseline, and 6-Lab settings with successive contributions.
Fig. 6: Visual impact of SphSPS parameters. Each contribution relevantly increases the regularity and integrates the contour prior information.

IV-B Impact of Contributions

In this section, we show the impact of the contributions within SphSPS. We report, for different distance settings, the contour detection PR/BR curves with the maximum F-measure (15) and the regularity G-GR (12) curves in Figure 5, and a zoom on a segmentation example in Figure 6. With a 3-dimensional feature space, SphSPS reduces to the spherical SLIC algorithm [33]. With the 6-dimensional space, SphSPS uses the CIELab features of [6] and the neighboring pixel information as in [10], with the color distance (4) and the contour information on the shortest path (5).

We demonstrate that each contribution improves the segmentation performances. We can especially observe that the color distance on the shortest path, which strengthens the superpixel convexity and homogeneity, indeed provides much more regular superpixels while also increasing the accuracy.

IV-C Comparison with the State-of-the-Art Methods

We compare the performances of the proposed SphSPS approach to those of state-of-the-art methods. We consider the planar methods SLIC [1], LSC [6], SNIC [2] and SCALP [10], and the spherical approach SphSLIC [33] in two different settings, i.e., considering the Euclidean (SphSLIC-Euc) and the cosine (SphSLIC-Cos) distances (see Section II-B). To ensure a fair comparison, the planar and SphSLIC-Euc methods are used with their default settings, since they provide a good trade-off between accuracy and regularity. Note that for the SphSLIC-Cos method [33], results are reported for the regularity setting optimizing the segmentation accuracy, since low performances were obtained with the default settings.

In Figure 7, we report the contour detection results measured by PR/BR curves with the F-measure (15) and BR/CD (14), the segmentation of objects with ASA (13), and the regularity with the proposed G-GR metric (12), obtained for several numbers of superpixels. SphSPS overall obtains the best segmentation results, with for instance the highest F-measure, and significantly outperforms the other spherical method SphSLIC in both distance modes, while producing very regular superpixels. Note that even without the contour prior, i.e., only using color information on the shortest path, SphSPS still significantly outperforms the other state-of-the-art methods. We also observe that by using a linear path approach, SCALP [10] degrades the segmentation accuracy of LSC [6]. This result highlights the need for considering our spherical shortest path instead of the linear one.

The regularity measured with the proposed G-GR (12) appears to be very relevant and able to differentiate planar and spherical methods. It evaluates the convexity and contour smoothness of each superpixel, along with their consistency, while C (10) is only based on a non-robust and independent circularity assumption. Hence, with G-GR, the regularity in the spherical space is accurately measured, such that no planar method has higher regularity than spherical ones for a given number of superpixels, contrary to C [33].

In Figure 8, we show segmentation examples of SphSPS compared to the state-of-the-art methods, on 360° equirectangular images and projected on the unit sphere. SphSPS produces regular superpixels in the spherical space and accurately captures the object contours compared to the other methods.

Finally, in terms of processing time, the relevance of our features enables SphSPS to converge quickly in a low number of iterations. For instance, only using the 6-dimensional feature space [6], SphSPS already obtains higher accuracy than the state-of-the-art methods (see Figure 5). With the significant optimizations proposed in Section II-C3, SphSPS can use the information on the shortest path to obtain significantly higher accuracy while remaining faster than existing spherical approaches [33]. Moreover, with basic multi-threading, we easily reduce the processing time of our implementation to further facilitate the use of SphSPS (code available online).

Fig. 7: Quantitative comparison on PR/BR, BR/CD, ASA and G-GR of the proposed SphSPS method to the state-of-the-art ones on the PSD [25].

Compared methods shown in Fig. 8: LSC [6], SNIC [2], SCALP [10] (planar), and SphSLIC-Euc [33], SphSLIC-Cos [33] (spherical).
Fig. 8: Visual comparison between SphSPS and the best planar (left) and spherical (right) state-of-the-art methods on PSD images, for two superpixel numbers (top and bottom rows). The compared methods may generate inaccurate superpixels, while SphSPS produces regular spherical superpixels with smooth boundaries that adhere well to the image contours.

V Conclusion

In this work, we generalize the shortest path approach between a pixel and a superpixel barycenter [10] to the case of spherical images. We show that the complexity resulting from the large number of pixels to process can be greatly reduced by exploiting the path redundancy on the 3D sphere. Color features on this path enable the generation of both very accurate and regular superpixels. Moreover, SphSPS can consider a contour prior to further improve its performances.

To ensure a relevant evaluation of regularity, we introduce a generalized metric measuring the spatial convexity and consistency in the 3D spherical space. While providing regular results in the acquisition space, SphSPS significantly outperforms both planar and spherical state-of-the-art methods.

Accuracy and regularity in the acquisition space are crucial for relevant display and for computer vision pre-processing. Future works will extend our method to spherical videos and other acquisition spaces, e.g., circular or polarimetric.


  • [1] R. Achanta, A. Shaji, and K. Smith et al. (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, pp. 2274–2282. Cited by: §I, §I, §II-A, §II, §IV-C.
  • [2] R. Achanta and S. Süsstrunk (2017) Superpixels and polygons using simple non-iterative clustering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 4895–4904. Cited by: §I, Fig. 8, §IV-C.
  • [3] J. E. Bresenham (1965) Algorithm for computer control of a digital plotter. IBM Systems Journal 4 (1), pp. 25–30. Cited by: §II-C2, §II-C.
  • [4] P. Buyssens, M. Toutain, A. Elmoataz, and O. Lézoray (2014) Eikonal-based vertices growing and iterative seeding for efficient graph-based segmentation. In Proc. of IEEE International Conference on Image Processing, pp. 4368–4372. Cited by: §I.
  • [5] R. Cabral and Y. Furukawa (2014) Piecewise planar and compact floorplan reconstruction from images. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 628–635. Cited by: §I.
  • [6] J. Chen, Z. Li, and B. Huang (2017) Linear spectral clustering superpixel. IEEE Transactions on Image Processing 26, pp. 3317–3330. Cited by: Fig. 1, §I, §I, Fig. 8, §IV-A3, §IV-B, §IV-C, §IV-C, §IV-C.
  • [7] P. F. Felzenszwalb and D. P. Huttenlocher (2004) Efficient graph-based image segmentation. International Journal of Computer Vision 59 (2), pp. 167–181. Cited by: §I.
  • [8] R. Giraud, V. Ta, A. Bugeau, P. Coupé, and N. Papadakis (2017) SuperPatchMatch: an algorithm for robust correspondences using superpixel patches. IEEE Transactions on Image Processing 26 (8), pp. 4068–4078. Cited by: §I.
  • [9] R. Giraud, V. Ta, and N. Papadakis (2017) Evaluation framework of superpixel methods with a global regularity measure. Journal of Electronic Imaging 26 (6). Cited by: §I, §III-A, §III-B1, §III-B1, §III, §IV-A2, §IV-A2, §IV-A2, §V.
  • [10] R. Giraud, V. Ta, and N. Papadakis (2018) Robust superpixels using color and contour features along linear path. Computer Vision and Image Understanding 170, pp. 1–13. Cited by: §I, §I, §I, §II-C1, §II-C2, §II, Fig. 8, §IV-A3, §IV-B, §IV-C, §IV-C.
  • [11] S. Gould, J. Zhao, X. He, and Y. Zhang (2014) Superpixel graph label transfer with learned distance metric. In European Conference on Computer Vision, pp. 632–647. Cited by: §I.
  • [12] M. Gromov et al. (1983) Filling Riemannian manifolds. Journal of Differential Geometry 18 (1), pp. 1–147. Cited by: §II-C3.
  • [13] C. F. Karney (2013) Algorithms for geodesics. Journal of Geodesy 87 (1), pp. 43–55. Cited by: §II-C3.
  • [14] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi (2009) TurboPixels: fast superpixels using geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (12), pp. 2290–2297. Cited by: §I.
  • [15] J. Liu, W. Yang, X. Sun, and W. Zeng (2017) Photo stylistic brush: robust style transfer via superpixel-based bipartite graph. IEEE Trans. on Multimedia 20 (7), pp. 1724–1737. Cited by: §I.
  • [16] M. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa (2011) Entropy rate superpixel segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2097–2104. Cited by: §I, §IV-A2.
  • [17] V. Machairas, M. Faessel, D. Cárdenas-Peña, T. Chabardes, T. Walter, and E. Decencière (2015) Waterpixels. IEEE Transactions on Image Processing 24 (11), pp. 3707–3716. Cited by: §I.
  • [18] M. Menze and A. Geiger (2015) Object scene flow for autonomous vehicles. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3061–3070. Cited by: §I.
  • [19] P. Neubert and P. Protzel (2012) Superpixel benchmark and comparison. In Forum Bildverarbeitung, pp. 1–12. Cited by: §IV-A2.
  • [20] R. Osserman et al. (1978) The isoperimetric inequality. Bulletin of the American Mathematical Society 84 (6), pp. 1182–1238. Cited by: §III-A.
  • [21] K. Sakurada and T. Okatani (2015) Change detection from a street image pair using cnn features and superpixel segmentation.. In British Machine Vision Conference, pp. 61–1. Cited by: §I.
  • [22] A. Schick, M. Fischer, and R. Stiefelhagen (2012) Measuring and evaluating the compactness of superpixels. In International Conference on Pattern Recognition, pp. 930–934. Cited by: §III-A, §III.
  • [23] J. Tighe and S. Lazebnik (2010) SuperParsing: scalable nonparametric image parsing with superpixels. In European Conference on Computer Vision, pp. 352–365. Cited by: §I.
  • [24] W. Tu, M. Liu, V. Jampani, D. Sun, S. Chien, M. Yang, and J. Kautz (2018) Learning superpixels with segmentation-aware affinity loss. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §I, §I.
  • [25] L. Wan, X. Xu, Q. Zhao, and W. Feng (2018) Spherical superpixels: benchmark and evaluation. In Asian Conference on Computer Vision, pp. 703–717. Cited by: §I, §IV-A1.
  • [26] H. Wang and P. A. Yushkevich (2013) Multi-atlas segmentation without registration: a supervoxel-based approach. International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 535–542. Cited by: §I.
  • [27] T. Wong, W. Luk, and P. Heng (1997) Sampling with hammersley and halton points. Journal of Graphics Tools 2 (2), pp. 9–24. Cited by: §II-B.
  • [28] J. Xiao, K. A. Ehinger, A. Oliva, and A. Torralba (2012) Recognizing scene viewpoint using panoramic place representation. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2695–2702. Cited by: §IV-A1.
  • [29] S. Xie and Z. Tu (2015) Holistically-nested edge detection. In IEEE International Conference on Computer Vision, pp. 1395–1403. Cited by: §IV-A3.
  • [30] H. Yang and H. Zhang (2016) Efficient 3d room shape recovery from a single panorama. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5422–5430. Cited by: §I.
  • [31] J. Yao, M. Boben, S. Fidler, and R. Urtasun (2015) Real-time coarse-to-fine topologically preserving segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2947–2955. Cited by: §I.
  • [32] Y. Zhang, X. Li, X. Gao, and C. Zhang (2016) A simple algorithm of superpixel segmentation with boundary constraint. IEEE Transactions on Circuits and Systems for Video Technology (99). Cited by: §I.
  • [33] Q. Zhao, F. Dai, Y. Ma, L. Wan, J. Zhang, and Y. Zhang (2018) Spherical superpixel segmentation. IEEE Trans. on Multimedia 20 (6), pp. 1406–1417. Cited by: §I, §I, §I, §II-B, §II-B, §II-B, §II-C1, §II, §III-A, §III-A, §III, Fig. 7, Fig. 8, §IV-A3, §IV-B, §IV-C, §IV-C, §IV-C.
  • [34] D. Zorin and A. H. Barr (1995) Correction of geometric perceptual distortions in pictures. In International Conf. on Computer Graphics and Interactive Techniques, pp. 257–264. Cited by: §I.