1 Introduction
The skeleton is one of the most representative visual properties; it describes objects with compact yet informative curves. Such curves constitute a continuous decomposition of object shapes [13], providing valuable cues for both object representation and recognition. Object skeletons can be converted into descriptive features and spatial constraints, which support human pose estimation [22], semantic segmentation [20], and object localization [8].
Researchers have been exploiting representative CNNs for skeleton detection and extraction [5, 24, 18, 17] for years. State-of-the-art approaches are rooted in effective multi-layer feature fusion, motivated by the observation that low-level features focus on detailed structures while high-level features are rich in semantics [5]. As a pioneering work, holistically-nested edge detection (HED) [24] treats detection as a pixel-wise classification problem, without considering the complementarity among multi-layer features. Other state-of-the-art approaches, e.g., fusing scale-associated deep side-outputs (FSDS) [18, 17] and the side-output residual network (SRN) [5], investigate the multi-layer association problem. FSDS requires intensive annotation of the scale for each skeleton point, while SRN pursues the complementarity between adjacent layers without a complete mathematical explanation. The problem of how to explore and fuse more representative features in a principled way remains open.
Through this analysis, it is revealed that HED treats skeleton detection as a pixel-wise classification problem over the side-outputs of a convolutional network. Mathematically, this architecture can be equated with a linear reconstruction model, by treating the convolutional feature maps as linear bases and the convolutional kernel values as weights. Under the guidance of linear span theory [6], we formalize a linear span framework for object skeleton detection. Within this framework, the output spaces of HED may have large intersections, since it fails to optimize the subspaces under mutual constraints, Fig. 1. To address this problem, we design the Linear Span Unit (LSU) according to this framework and use it to modify the convolutional network. The resulting network, named the Linear Span Network (LSN), consists of feature linear span, resolution alignment, and subspace linear span. This architecture increases the independence of convolutional features and the efficiency of feature integration, shown as smaller intersections and a larger union set in Fig. 1. Consequently, the capability of fitting complex ground-truth is enhanced. By stacking multiple LSUs in a deep-to-shallow manner, LSN captures both rich object context and high-resolution details to suppress cluttered backgrounds and reconstruct object skeletons. The contributions of this paper include:

A linear span framework that reveals the essential nature of the object skeleton detection problem, and shows that a potential performance gain can be achieved by both increasing the independence of spanning sets and enlarging the spanned output space.

A Linear Span Network (LSN) that evolves toward an optimized architecture for object skeleton detection under the guidance of the linear span framework.
2 Related work
Early skeleton extraction methods treat skeleton detection as a morphological operation [12, 25, 14, 9, 7, 23, 11]. One hypothesis is that object skeletons are subsets of the lines connecting center points of superpixels [9]. Such line subsets can be explored from superpixels using a sequence of deformable discs to extract the skeleton path [7]. In [23], the consistency and smoothness of skeletons are modeled with spatial filters, e.g., a particle filter, which links local skeleton segments into continuous curves. More recently, learning-based methods have been utilized for skeleton detection. The task has been solved with a multiple instance learning approach [21], which picks up true skeleton pixels from bags of pixels. The structured random forest has been employed to capture the diversity of skeleton patterns [20], which can also be modeled with a subspace multiple instance learning method [15].
With the rise of deep learning, researchers have formulated skeleton detection as an image-to-mask classification problem, using learned weights to fuse multi-layer convolutional features in an end-to-end manner. HED [24] learns a pixel-wise classifier to produce edges, which can also be used for skeleton detection. Fusing scale-associated deep side-outputs (FSDS) [18] learns a multi-scale skeleton representation given scale-associated ground-truth. The side-output residual network (SRN) [5] leverages output residual units to fit the errors between the object symmetry/skeleton ground-truth and the side-outputs of multiple convolutional layers.
The problem of how to fuse multi-layer convolutional features to generate an output mask, e.g., an object skeleton, has been extensively explored. Nevertheless, existing approaches barely investigate the linear independence of multi-layer features, which limits their representative capacity. Our approach targets this problem from the perspective of linear span theory, via feature linear span of multi-layer features and subspace linear span of the spanned subspaces.
3 Problem Formulation
3.1 Rethinking HED
Fig. 2: Schematic of linear span with a set of dependent vectors (a) and independent vectors (b).
In this paper, we revisit the implementation of HED and reveal that HED, as well as its variations, can be formulated within linear span theory [6].
HED utilizes a fully convolutional network with deep supervision for edge detection, which is a typical low-level image-to-mask task. Denoting the convolutional feature as $C$ with $K$ maps and the pixel-wise classifier as $\sigma(\cdot)$ with weights $w_k$, HED is computed as a pixel-wise classification problem, as

(1) $\hat{y}_j = \sigma\Big(\sum_{k=1}^{K} w_k \cdot c_{k,j}\Big),$

where $c_{k,j}$ is the feature value of the $j$-th pixel on the $k$-th convolutional map and $\hat{y}_j$ is the classified label of the $j$-th pixel in the output image $\hat{Y}$.
Not surprisingly, this can be equated with a linear reconstruction problem, as

(2) $\hat{Y} = \sum_{k=1}^{K} \lambda_k \cdot C_k,$

where $\lambda_k$ is the linear reconstruction weight and $C_k$ is the $k$-th feature map in $C$.
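The equivalence of the two views can be checked with a toy numpy sketch; the sizes and random features below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sizes: K feature maps of H x W pixels.
K, H, W = 4, 8, 8
rng = np.random.default_rng(0)
C = rng.standard_normal((K, H, W))   # convolutional feature maps
w = rng.standard_normal(K)           # 1x1 convolution weights

# Pixel-wise classification view (Eq. 1): the score of each pixel is a
# weighted sum over the K maps, passed through a sigmoid.
scores = 1.0 / (1.0 + np.exp(-np.tensordot(w, C, axes=1)))

# Linear reconstruction view (Eq. 2): the same pre-sigmoid output is a
# linear combination of the feature maps, i.e. a vector in span{C_k}.
recon = sum(w[k] * C[k] for k in range(K))

assert np.allclose(np.tensordot(w, C, axes=1), recon)
```

The 1x1 convolution and the weighted sum of maps are literally the same operation, which is what licenses treating the feature maps as linear bases.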
We treat each side-output of HED as a feature vector in the linearly spanned subspace $V_l$, in which $l$ is the index of the convolutional stage. HED then forces each subspace $V_l$ to approximate the ground-truth space $\mathcal{Y}$. We use three convolutional layers as an example, which generate subspaces $V_1$, $V_2$, and $V_3$. The relationship between these subspaces and the ground-truth space can be illustrated as lines in a 3-dimensional space, Fig. 2(a).
As HED does not optimize the subspaces under mutual constraints, it fails to exploit the complementarity of the subspaces to decorrelate them. The reconstructions can be formulated as

(3) $\hat{Y}_l = \sum_{k=1}^{K} \lambda_{l,k} \cdot C_{l,k}, \quad \min \big\| Y - \hat{Y}_l \big\|^2, \quad l = 1, 2, 3,$

where $Y$ is the ground-truth and $\hat{Y}_l$ is the reconstruction from stage $l$. When $\hat{Y}_1$, $\hat{Y}_2$, and $\hat{Y}_3$ are linearly dependent, they only have the capability to reconstruct vectors in a plane. That is to say, when the point $Y$ lies out of that plane, the reconstruction error can hardly be eliminated, Fig. 2(a).
Obviously, if $\hat{Y}_1$, $\hat{Y}_2$, and $\hat{Y}_3$ are linearly independent, i.e., not in the same plane, Fig. 2(b), the reconstruction can be significantly eased. To achieve this, we can formulate the reconstruction iteratively as

(4) $\min \big\| Y - \hat{Y}_1 \big\|^2, \quad \min \big\| Y - \hat{Y}_1 - \hat{Y}_2 \big\|^2, \quad \min \big\| Y - \hat{Y}_1 - \hat{Y}_2 - \hat{Y}_3 \big\|^2.$

It is observed that $\hat{Y}_2$ is refined under the constraint of $\hat{Y}_1$, and $\hat{Y}_3$ is optimized in a similar way, which aims at vector decorrelation. The sum of subspaces, $\hat{Y}_1 + \hat{Y}_2$, is denoted by the dark blue plane, and $\hat{Y}_1 + \hat{Y}_2 + \hat{Y}_3$ by the light blue sphere, Fig. 2(b).
It is now straightforward to generalize Eq. (4) to

(5) $\min \Big\| Y - \sum_{i=1}^{l} \hat{Y}_i \Big\|^2, \quad l = 1, \dots, n.$

One variation of HED, i.e., SRN, which can be understood as a special case of Eq. (5) spanning only adjacent subspaces, has already shown its effectiveness.
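The residual scheme of Eqs. (4)-(5) can be sketched numerically: each stage fits, by least squares, the residual left by the earlier stages, so the total reconstruction error never grows as subspaces are summed. The dimensions and random bases below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal(64)    # flattened ground-truth "vector"

# Three toy subspaces, each spanned by a few basis vectors (feature maps).
bases = [rng.standard_normal((64, 3)) for _ in range(3)]

# Eqs. (4)/(5): each stage reconstructs the residual of earlier stages.
residual, approx = Y.copy(), np.zeros_like(Y)
errors = []
for B in bases:
    coef, *_ = np.linalg.lstsq(B, residual, rcond=None)
    approx += B @ coef                 # add this stage's reconstruction
    residual = Y - approx              # what remains for later stages
    errors.append(np.linalg.norm(residual))

# The reconstruction error is non-increasing as subspaces are summed.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(errors, errors[1:]))
```

Since each step is an orthogonal projection of the current residual, later stages are pushed toward directions the earlier ones cannot represent, which is the decorrelation effect described above.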
3.2 Linear Span View
Based on the discussion in the last section, we now formulate a mathematical framework based on linear span theory [6], which guides the design of the Linear Span Network (LSN) toward an optimized architecture.
In linear algebra, linear span is defined as a procedure to construct a linear space by a set of vectors or a set of subspaces.
Definition 1. $V$ is a linear space over $\mathbb{R}$. The set $\{v_1, v_2, \dots, v_m\} \subset V$ is a spanning set for $V$ if every $v$ in $V$ can be expressed as a linear combination of $v_1, \dots, v_m$, as

(6) $v = \sum_{k=1}^{m} \lambda_k \cdot v_k, \quad \lambda_k \in \mathbb{R}.$
Theorem 1. Let $v_1, v_2, \dots, v_m$ be vectors in $\mathbb{R}^n$. Then $\{v_1, \dots, v_m\}$ spans $\mathbb{R}^n$ if and only if, for the matrix $A = [v_1\ v_2\ \cdots\ v_m]$, the linear system $Ax = b$ is consistent for every $b$ in $\mathbb{R}^n$.
Remark 1. According to Theorem 1, if the linear system is consistent for almost every vector in a linear space, that space can be approximated by the linearly spanned space. This theorem uncovers the principle of LSN: to pursue a linear system, as above, that holds for as many ground-truth vectors as possible.
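The rank condition behind Theorem 1 can be checked directly with numpy; the two small matrices below are illustrative examples, not taken from the paper.

```python
import numpy as np

# Columns of A are the vectors v_1..v_m; they span R^n iff rank(A) = n,
# i.e. A x = b is consistent for every b in R^n (Theorem 1).
A_dep = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [0., 0., 0.]])   # third coordinate is unreachable
A_ind = np.array([[1., 0., 0.],
                  [0., 1., 0.],
                  [0., 0., 1.]])

assert np.linalg.matrix_rank(A_dep) < 3   # does not span R^3
assert np.linalg.matrix_rank(A_ind) == 3  # spans R^3: Ax=b always consistent
```

In the dependent case, any $b$ with a nonzero third coordinate makes $Ax = b$ inconsistent, which is exactly the out-of-plane situation of Fig. 2(a).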
Definition 2. A finite set of vectors which spans $V$ and is linearly independent is called a basis for $V$.
Theorem 2. Every linearly independent set of vectors in a finite-dimensional linear space $V$ can be completed to a basis of $V$.
Theorem 3. Every subspace $U$ has a complement in $V$, that is, another subspace $W$ such that every vector $v$ in $V$ can be decomposed uniquely as

(7) $v = u + w, \quad u \in U,\ w \in W.$
Definition 3. $V$ is said to be the sum of its subspaces $V_1, \dots, V_m$ if every $v$ in $V$ can be expressed as

(8) $v = v_1 + v_2 + \cdots + v_m, \quad v_k \in V_k.$
Remark 2. We call the spanning of feature maps into a subspace feature linear span, and the sum of subspaces subspace linear span. From Theorem 2 and Theorem 3, it follows that the union of the spanning sets of subspaces is a spanning set of the sum of the subspaces. That is, in subspace linear span we can merge the spanning sets of subspaces step by step to construct a larger space.
Theorem 4. Suppose $V$ is a finite-dimensional linear space, $U$ and $W$ are two subspaces of $V$ such that $V = U + W$, and $X$ is the intersection of $U$ and $W$, i.e., $X = U \cap W$. Then

(9) $\dim V = \dim U + \dim W - \dim X.$
Remark 3. From Theorem 4, the smaller the dimension of the intersection of two subspaces, the larger the dimension of their sum. Successively spanning the subspaces from deep to shallow with supervision therefore increases the independence of the spanning sets and enlarges the sum of subspaces. It enhances the representative capacity of the convolutional features and integrates them more effectively.
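The dimension formula of Eq. (9) can be verified on a small example using matrix ranks; the two subspaces of $\mathbb{R}^4$ below are toy choices for illustration.

```python
import numpy as np

# Subspaces of R^4 given by their spanning columns.
U = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])  # dim U = 2
W = np.array([[0., 0.], [1., 0.], [0., 1.], [0., 0.]])  # dim W = 2

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))      # dim(U + W)
dim_int = dim_U + dim_W - dim_sum                       # Eq. (9), rearranged

# U and W share one direction (e_2), so the intersection has dim 1
# and the sum reaches only 3 of the 4 available dimensions.
assert (dim_sum, dim_int) == (3, 1)
```

Shrinking the shared direction (the intersection) is precisely what enlarges the sum, which is the design pressure Remark 3 places on the spanned subspaces.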
4 Linear Span Network
With the help of the proposed framework, the Linear Span Network (LSN) is designed for the same target as HED and SRN, i.e., the object skeleton detection problem. Following linear reconstruction theory, a novel architecture named the Linear Span Unit (LSU) is first defined. LSN is then derived from VGG16 [19] using LSUs and the hints from Remarks 1-3. VGG16 is chosen for fair comparison with HED and SRN. In what follows, the implementations of LSU and LSN are introduced.
4.1 Linear Span Unit
The architecture of the Linear Span Unit (LSU) is shown in Fig. 3, where each feature map is regarded as a feature vector. The input feature vectors are unified with a concatenation (concat for short) operation, as

(10) $C = \mathrm{concat}(c_1, c_2, \dots, c_m),$

where $c_k$ is the $k$-th feature vector. In order to compute linear combinations of the feature vectors, a convolution operation with $1 \times 1$ convolutional kernels is employed:

(11) $\hat{y}_s = \sum_{k=1}^{m} \lambda_{k,s} \cdot c_k,$

where $\lambda_{k,s}$ is the convolutional parameter with $m$ elements for the $s$-th reconstruction output. The LSU generates $n$ feature vectors in the subspace spanned by the input feature vectors. A slice layer is further utilized to separate them for different connections, which is denoted as

(12) $(\hat{y}_1, \hat{y}_2, \dots, \hat{y}_n) = \mathrm{slice}(\hat{Y}).$
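The concat / 1x1-conv / slice pipeline of Eqs. (10)-(12) reduces to a weighted sum of feature maps, which a short numpy sketch makes explicit. The function name, shapes, and random inputs are illustrative assumptions.

```python
import numpy as np

def linear_span_unit(feats, weights):
    """Sketch of an LSU: concat -> 1x1 conv -> slice (Eqs. 10-12).

    feats:   list of m feature maps, each of shape (H, W)
    weights: (n, m) matrix; row s holds the m combination
             coefficients of the s-th reconstruction output
    """
    C = np.stack(feats)                       # concat: (m, H, W)
    Y = np.tensordot(weights, C, axes=1)      # 1x1 conv: (n, H, W)
    return [Y[s] for s in range(Y.shape[0])]  # slice into n vectors

rng = np.random.default_rng(2)
feats = [rng.standard_normal((4, 4)) for _ in range(3)]
outs = linear_span_unit(feats, rng.standard_normal((2, 3)))
# Each output lies in the subspace spanned by the input feature vectors.
assert len(outs) == 2 and outs[0].shape == (4, 4)
```

In the actual network the weights are learned by back-propagation; here they are random merely to exercise the data flow.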
4.2 Linear Span Network Architecture
The architecture of LSN is shown in Fig. 4; it consists of three components, i.e., feature linear span, resolution alignment, and subspace linear span. The VGG16 network with 5 convolutional stages [19] is used as the backbone.
In feature linear span, an LSU is used to span the convolutional features of the last layer of each stage according to Eq. (11). Supervision is added to the output of each LSU so that the spanned subspace approximates the ground-truth space, following Remark 1. If only feature linear span is utilized, LSN degrades to HED [24]. The subspaces in HED, however, fit the ground-truth space separately, and thus fail to decorrelate the spanning sets among subspaces. According to Remarks 2 and 3, we further employ subspace linear span to enlarge the sum of subspaces and address the decorrelation problem.
As the resolutions of the vectors in different subspaces vary widely, a simple upsampling operation causes a mosaic effect, which introduces noise into the subspace linear span. Resolution alignment is therefore necessary for LSN. Thus, in Fig. 4, LSUs are placed between adjacent layers with supervision. As a preprocessing component for subspace linear span, resolution alignment outputs feature vectors with the same resolution.
Subspace linear span is also implemented with LSUs, which concatenate feature vectors from deep to shallow layers and span the subspaces with Eq. (5). According to Remark 3, a step-by-step strategy is utilized to explore the complementarity of the subspaces. With loss layers attached to the LSUs, this not only enlarges the sum of the subspaces spanned by different convolutional layers, but also decorrelates the union of their spanning sets. With this architecture, LSN strengthens the representative capacity of convolutional features for fitting complex ground-truth.
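The deep-to-shallow merging strategy can be sketched as a loop that repeatedly feeds the outputs of one LSU, together with the next shallower stage, into the next LSU. Everything below (shapes, random weights, the minimal `lsu` helper) is a toy assumption that only mirrors the data flow, not the trained network.

```python
import numpy as np

def lsu(feats, weights):
    # minimal LSU: linear combinations of the input feature vectors
    return np.tensordot(weights, np.stack(feats), axes=1)

rng = np.random.default_rng(3)
# One (H, W) side-output subspace vector per stage, deep to shallow
# (assume resolution alignment has already unified their sizes).
stages = [rng.standard_normal((8, 8)) for _ in range(3)]

# Subspace linear span: merge spanning sets step by step, deepest first,
# so each LSU spans the sum of its inputs' subspaces (Remark 2).
merged = stages[0][None]
for feat in stages[1:]:
    merged = lsu([*merged, feat], rng.standard_normal((2, len(merged) + 1)))

assert merged.shape == (2, 8, 8)
```

Each pass enlarges the spanning set before recombining it, matching the step-by-step construction of a larger space described in Remark 2.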
5 Experiments
5.1 Experimental setting
Datasets: We evaluate the proposed LSN on public skeleton datasets including SYMMAX [21], WH-SYMMAX [15], SK-SMALL [18], SK-LARGE [17], and SymPASCAL [5]. We also evaluate LSN on edge detection using the BSDS500 dataset [1] to validate its generality.
SYMMAX is derived from BSDS300 [1] and contains 200/100 training/testing images, annotated with local skeletons on both foreground and background. WH-SYMMAX is developed for object skeleton detection but contains only cropped horse images, which are not comprehensive for general objects. SK-SMALL involves skeletons of 16 object classes with 300/206 training/testing images. Based on SK-SMALL, SK-LARGE extends this to 746/745 training/testing images. SymPASCAL is derived from the PASCAL-VOC-2011 segmentation dataset [4] and contains 14 object classes with 648/787 images for training/testing.
The BSDS500 [1] dataset is used to evaluate LSN's performance on edge detection. It is composed of 200 training images, 100 validation images, and 200 testing images, each annotated by five persons on average. For training images, we preserve the positive labels annotated by at least three human annotators.
Evaluation protocol:
The precision-recall curve (PR-curve) is used to evaluate the performance of the detection methods. The output skeleton/edge masks are binarized with different threshold values; comparing the masks with the ground-truth yields precision $P$ and recall $R$. For skeleton detection, the F-measure, computed at the optimal threshold over the whole dataset, is used to evaluate the different approaches, as

(13) $F = \dfrac{2 \cdot P \cdot R}{P + R}.$
To evaluate edge detection performance, we utilize three standard measures [1]: F-measures when choosing an optimal scale for the entire dataset (ODS) or per image (OIS), and the average precision (AP).
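A minimal sketch of the ODS-style evaluation, assuming plain pixel-wise counting: the benchmark additionally performs boundary matching with a distance tolerance, which is omitted here for brevity.

```python
import numpy as np

def best_f_measure(pred, gt, thresholds=np.linspace(0.01, 0.99, 99)):
    """F-measure at the optimal global threshold (Eq. 13), simplified:
    no boundary matching or tolerance, just pixel-wise counts."""
    best = 0.0
    for t in thresholds:
        mask = pred >= t
        tp = np.logical_and(mask, gt).sum()
        if tp == 0:
            continue  # avoid division by zero for empty predictions
        precision = tp / mask.sum()
        recall = tp / gt.sum()
        best = max(best, 2 * precision * recall / (precision + recall))
    return best

gt = np.zeros((8, 8), bool); gt[3] = True   # one ground-truth skeleton row
pred = np.where(gt, 0.9, 0.1)               # confident, correct detector
assert best_f_measure(pred, gt) == 1.0
```

Sweeping a single threshold over the whole dataset gives ODS; picking the best threshold per image instead gives OIS.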
Hyper-parameters: For both skeleton and edge detection, we use VGG16 [19] as the backbone network. During learning, we set the mini-batch size to 1, the loss weight of each output layer to 1, the momentum to 0.9, the weight decay to 0.002, and the initial learning rate to 1e-6, decreased by one order of magnitude every 10,000 iterations.
5.2 LSN Implementation
We evaluate four LSN architectures for subspace linear span and validate the iterative training strategy.
LSN architectures. If there is no subspace linear span, Fig. 4, LSN simplifies to HED [24], denoted as LSN_1; its F-measure is 49.53%. When two adjacent subspaces are spanned, the network is denoted as LSN_2, which is the same as SRN [5]. LSN_2 achieves a significant performance improvement over HED, which has feature linear span but no subspace linear span. Comparing LSNs with different numbers of spanned subspaces, LSN_3 achieves the best F-measure of 66.15% (improved to 66.82% with the iterative training strategy below). When the subspace number is increased to 4, the skeleton detection performance drops. The following explains why LSN_3 is the best choice.
If too few subspaces are spanned, the complementarity of convolutional features from different layers cannot be effectively explored. Conversely, if an LSU fuses feature layers with a large resolution difference, multiple upsampling operations are required, which deteriorate the features. Although resolution alignment significantly eases this problem, the number of adjacent feature layers to be fused in an LSU remains a practical choice. LSN_3 reports the best performance by fusing one adjacent layer of higher resolution and one adjacent layer of lower resolution. On one hand, the groups of subspaces in LSN_3 enable richer feature integration; on the other hand, little information is lost after a single upsampling operation.
| Architecture | F-measure (%) |
| --- | --- |
| LSN_1 (HED, feature linear span only) | 49.53 |
| LSN_2 (SRN, feature and 2-subspace linear span) | 65.88 |
| LSN_3 (LSN, feature and 3-subspace linear span) | 66.15 |
| LSN_4 (LSN, feature and 4-subspace linear span) | 65.89 |
| | w/o RA | end-to-end | iter1 | iter2 | iter3 |
| --- | --- | --- | --- | --- | --- |
| F-measure (%) | 66.15 | 66.63 | 66.82 | 66.74 | 66.68 |
Training strategy. With three feature layers spanned, LSN needs to upsample the side-output feature layers from the deepest to the shallowest. We use supervised upsampling to unify the resolution of the feature layers. During training, resolution alignment is also achieved by stacking LSUs. We propose to train the two kinds of linear span, i.e., feature linear span with resolution alignment and subspace linear span, iteratively. In the first iteration, we tune the LSU parameters for feature linear span and resolution alignment using the VGG model pre-trained on ImageNet, while also updating the convolutional parameters. Keeping the LSU parameters for resolution alignment unchanged, we then tune the LSU parameters for feature linear span and subspace linear span using the new model. In subsequent iterations, the model is fine-tuned on the snapshot of the previous iteration. With this training strategy, the skeleton detection performance improves from 66.15% to 66.82%, as shown in the table above. The performance changes only marginally when more iterations are used, so we use a single iteration (iter1) in all experiments.

LSU effect. In Fig. 5, we use a giraffe skeleton from SK-LARGE as an example to compare and analyze the feature vectors (bases) learned by HED [24], SRN [5], and LSN. In Fig. 5(a) and (c), we visualize the feature vectors learned by HED and by the proposed LSN, respectively. The first column shows that HED's results incorporate more background noise and mosaic effects, indicating that LSN better spans the output feature space. In Fig. 5(b) and (d), we visualize the subspace vectors learned by SRN [5] and by LSN, respectively. The first column shows that SRN's results incorporate more background noise, which must be suppressed by a residual reconstruction procedure. In contrast, the subspace vectors of LSN are much cleaner and more compact. This demonstrates that LSN better spans the output space and strengthens the representative capacity of convolutional features, easing the problem of fitting complex outputs with a limited number of convolutional layers.
5.3 Performance and Comparison
| Methods | F-measure | Runtime/s |
| --- | --- | --- |
| Lindeberg [11] | 0.270 | 4.05 |
| Levinshtein [9] | 0.243 | 146.21 |
| Lee [7] | 0.255 | 609.10 |
| MIL [21] | 0.293 | 42.40 |
| HED [24] | 0.495 | 0.05 |
| SRN [5] | 0.655 | 0.08 |
| LMSDS [17] | 0.649 | 0.05 |
| LSN (ours) | 0.668 | 0.09 |
Skeleton detection.
The proposed LSN is evaluated on SK-LARGE and compared with the state-of-the-art approaches; the performance is shown in the PR-curves and in the table above. The result of SRN [5] is obtained by running the authors' source code on a Tesla K80 GPU; the other results are provided by [17].
The conventional approaches, including Lindeberg [11], Levinshtein [9], and Lee [7], produce skeleton masks without any learning strategy. They are time-consuming and achieve very low F-measures of 27.0%, 24.3%, and 25.5%, respectively. The typical learning approach, i.e., multiple instance learning (MIL) [21], achieves an F-measure of 29.3%. It extracts pixel-wise features at multiple orientations and scales, and takes 42.40 seconds on average to distinguish skeleton pixels from the background in a single image.
The CNN-based approaches achieve a huge performance gain over the conventional approaches. HED [24] achieves an F-measure of 49.5% and uses 0.05 seconds to process an image, while SRN [5] achieves 65.5% and uses 0.08 seconds. The scale-associated multi-task method, LMSDS [17], achieves 64.9%, built on HED with pixel-level scale annotations. Our proposed LSN reports the best detection performance of 66.8%, with a slightly higher runtime cost than HED and SRN.
The results show that feature linear span is effective for skeleton detection. As discussed above, HED and SRN are two special cases of LSN. LSN, which spans three layers in each span unit, is a better choice than the state-of-the-art SRN. Some skeleton detection results are shown in Fig. 7. HED produces considerable noise, while the FSDS results are not smooth. Comparing SRN with LSN, LSN rectifies some false positives, as shown in columns one and three, and recovers missed parts, as shown in column six.
| Methods | WH-SYMMAX | SK-SMALL | SYMMAX | SymPASCAL |
| --- | --- | --- | --- | --- |
| Levinshtein [9] | 0.174 | 0.217 | – | 0.134 |
| Lee [7] | 0.223 | 0.252 | – | 0.135 |
| Lindeberg [11] | 0.277 | 0.227 | 0.360 | 0.138 |
| Particle Filter [23] | 0.334 | 0.226 | – | 0.129 |
| MIL [21] | 0.365 | 0.392 | 0.362 | 0.174 |
| HED [24] | 0.743 | 0.542 | 0.427 | 0.369 |
| FSDS [18] | 0.769 | 0.623 | 0.467 | 0.418 |
| SRN [5] | 0.780 | 0.632 | 0.446 | 0.443 |
| LSN (ours) | 0.797 | 0.633 | 0.480 | 0.425 |
The proposed LSN is also evaluated on four other commonly used datasets: WH-SYMMAX [15], SK-SMALL [18], SYMMAX [21], and SymPASCAL [5]. The F-measures are shown in the table above. As on SK-LARGE, LSN achieves the best detection performance on WH-SYMMAX, SK-SMALL, and SYMMAX, with F-measures of 79.7%, 63.3%, and 48.0%, respectively. This corresponds to gains of 5.4%, 9.1%, and 5.3% over HED, and 1.7%, 0.1%, and 3.4% over SRN. On SymPASCAL, LSN achieves a comparable performance of 42.5% vs. 44.3% for the state-of-the-art SRN.
Edge detection. Edge detection has a similar formulation to skeleton detection: it discriminates whether a pixel belongs to an edge, and the edge map can likewise be reconstructed from the convolutional feature maps. In this section, we compare the edge detection results of the proposed LSN with other state-of-the-art methods, such as Canny [2], Sketch Tokens [10], Structured Edge (SE) [3], gPb [1], DeepContour [16], HED [24], and SRN [5], Fig. 8 and Table 5.
Fig. 8 shows that the best conventional approach is SE, with an F-measure (ODS) of 0.739, and that all CNN-based approaches achieve much better detection performance. HED, a baseline deep learning method, achieves 0.780. The proposed LSN reports the highest F-measure of 0.790, a very small gap (0.010) to human performance. The F-measure at the optimal per-image scale (OIS) is 0.806, even higher than human performance, Table 5. The good performance of the proposed LSN demonstrates its general applicability to image-to-mask tasks.
| Methods | ODS | OIS | AP | FPS |
| --- | --- | --- | --- | --- |
| Canny [2] | 0.590 | 0.620 | 0.578 | 15 |
| ST [10] | 0.721 | 0.739 | 0.768 | 1 |
| gPb [1] | 0.726 | 0.760 | 0.727 | 1/240 |
| SE [3] | 0.739 | 0.759 | 0.792 | 2.5 |
| DC [16] | 0.757 | 0.776 | 0.790 | 1/30 |
| HED [24] | 0.780 | 0.797 | 0.814 | 2.5 |
| SRN [5] | 0.782 | 0.800 | 0.779 | 2.3 |
| LSN (ours) | 0.790 | 0.806 | 0.618 | 2.0 |
| Human | 0.800 | 0.800 | – | – |
6 Conclusion
The skeleton is one of the most representative visual properties, describing objects with compact yet informative curves. In this paper, the skeleton detection problem is formulated as a linear reconstruction problem. Consequently, a generalized linear span framework for skeleton detection is presented with formal mathematical definitions. We explore Linear Span Units (LSUs) to learn a CNN-based mask reconstruction model. With LSUs we implement three components, i.e., feature linear span, resolution alignment, and subspace linear span, and update the holistically-nested edge detection (HED) network to the Linear Span Network (LSN). With feature linear span, the ground-truth space can be approximated by the linearly spanned output space. With subspace linear span, not only is the independence among the spanning sets of subspaces increased, but the spanned output space is also enlarged. As a result, LSN better approximates the ground-truth space, e.g., against cluttered backgrounds and varying scales. Experimental results validate the state-of-the-art performance of the proposed LSN, and we provide a principled way to learn more representative convolutional features.
Acknowledgement
This work was partially supported by the National Nature Science Foundation of China under Grant 61671427 and Grant 61771447, and Beijing Municipal Science & Technology Commission under Grant Z181100008918014.
References
 [1] Arbelaez, P., Maire, M., Fowlkes, C.C., Malik, J.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 898–916 (2011)
 [2] Canny, J.F.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)
 [3] Dollár, P., Zitnick, C.L.: Fast edge detection using structured forests. IEEE Trans. Pattern Anal. Mach. Intell. 37(8), 1558–1570 (2015)

 [4] Everingham, M., Gool, L.J.V., Williams, C.K.I., Winn, J.M., Zisserman, A.: The pascal visual object classes (VOC) challenge. International Journal of Computer Vision 88(2), 303–338 (2010)
 [5] Ke, W., Chen, J., Jiao, J., Zhao, G., Ye, Q.: SRN: side-output residual network for object symmetry detection in the wild. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 302–310 (2017)
 [6] Lax, P.: Linear Algebra and Its Applications, vol. 2. WileyInterscience (2007)
 [7] Lee, T.S.H., Fidler, S., Dickinson, S.J.: Detecting curved symmetric parts using a deformable disc model. In: IEEE International Conference on Computer Vision. pp. 1753–1760 (2013)
 [8] Lee, T.S.H., Fidler, S., Dickinson, S.J.: Learning to combine midlevel cues for object proposal generation. In: IEEE International Conference on Computer Vision. pp. 1680–1688 (2015)
 [9] Levinshtein, A., Sminchisescu, C., Dickinson, S.J.: Multiscale symmetric part detection and grouping. International Journal of Computer Vision 104(2), 117–134 (2013)
 [10] Lim, J.J., Zitnick, C.L., Dollár, P.: Sketch tokens: A learned midlevel representation for contour and object detection. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 3158–3165 (2013)
 [11] Lindeberg, T.: Edge detection and ridge detection with automatic scale selection. International Journal of Computer Vision 30(2), 117–156 (1998)
 [12] Saha, P.K., Borgefors, G., di Baja, G.S.: A survey on skeletonization algorithms and their applications. Pattern Recognition Letters 76, 3–12 (2016)
 [13] Sebastian, T.B., Klein, P.N., Kimia, B.B.: Recognition of shapes by editing their shock graphs. IEEE Trans. Pattern Anal. Mach. Intell. 26(5), 550–571 (2004)
 [14] Shen, W., Bai, X., Hu, R., Wang, H., Latecki, L.J.: Skeleton growing and pruning with bending potential ratio. Pattern Recognition 44(2), 196–209 (2011)
 [15] Shen, W., Bai, X., Hu, Z., Zhang, Z.: Multiple instance subspace learning via partial random projection tree for local reflection symmetry in natural images. Pattern Recognition 52, 306–316 (2016)
 [16] Shen, W., Wang, X., Wang, Y., Bai, X., Zhang, Z.: Deepcontour: A deep convolutional feature learned by positive-sharing loss for contour detection. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 3982–3991 (2015)
 [17] Shen, W., Zhao, K., Jiang, Y., Wang, Y., Bai, X., Yuille, A.L.: Deepskeleton: Learning multi-task scale-associated deep side outputs for object skeleton extraction in natural images. IEEE Trans. Image Processing 26(11), 5298–5311 (2017)
 [18] Shen, W., Zhao, K., Jiang, Y., Wang, Y., Zhang, Z., Bai, X.: Object skeleton extraction in natural images by fusing scale-associated deep side outputs. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 222–230 (2016)
 [19] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv abs/1409.1556 (2014)
 [20] Teo, C.L., Fermüller, C., Aloimonos, Y.: Detection and segmentation of 2d curved reflection symmetric structures. In: IEEE International Conference on Computer Vision. pp. 1644–1652 (2015)
 [21] Tsogkas, S., Kokkinos, I.: Learning-based symmetry detection in natural images. In: European Conference on Computer Vision (2012)
 [22] Wei, S., Ramakrishna, V., Kanade, T., Sheikh, Y.: Convolutional pose machines. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 4724–4732 (2016)
 [23] Widynski, N., Moevus, A., Mignotte, M.: Local symmetry detection in natural images using a particle filtering approach. IEEE Trans. Image Processing 23(12), 5309–5322 (2014)
 [24] Xie, S., Tu, Z.: Holistically-nested edge detection. In: IEEE International Conference on Computer Vision. pp. 1395–1403 (2015)
 [25] Yu, Z., Bajaj, C.L.: A segmentation-free approach for skeletonization of grayscale images via anisotropic vector diffusion. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition. pp. 415–420 (2004)