Embellished designs on the surface of cultural heritage objects, such as pottery, shell, stone and wood, contain important information for archaeologists [Zhou et al.2017]. These designs, if successfully identified and correlated, can be used to build chronologies and track trade networks of a region from thousands of years ago. In archeology, most of these designs are curve patterns stamped or carved by their makers. Therefore, it is of great interest to archaeologists to accurately segment the curve structures on the surface of unearthed fragments of cultural heritage objects and identify their underlying designs [Kampel and Sablatnig2007, Halir1999]. Figure 1 shows several unearthed pottery sherds dating to the Woodland period of Southeastern North America. The curve structures on their surfaces reflect portions of curve patterns carved into wooden paddles and applied onto hand-built clay vessels by southeastern Native Americans around 2000 years ago. Hundreds of thousands of such fragmented cultural heritage objects are stored in museums, which calls for more intelligent and automatic tools to explore them.
Clearly, accurately segmenting the curve structures stamped on the surface is the first step in exploring these cultural heritage objects. In most cases, these curve structures do not bear distinctive colors, and it is very difficult, if not impossible, to segment them from an RGB image of the sherd taken by a traditional camera, e.g., Figure 1(a). In archeology, 3D scanners are usually utilized to produce a depth image of the object surface: with paddle stamping, the locations of curves exhibit a larger depth than the non-curve portion of the surface, as shown in Figure 1(b).
However, three complexities may lead to very weak curve structures on the obtained depth map and make the curve-structure segmentation a very challenging problem. First, the carved paddles used for stamping are usually flat while the object surfaces are usually not. As a result, the paddle typically does not well fit the object surface, which leads to shallow curves at many locations. Second, purposeful smoothing of the stamped surface during vessel manufacture or weathering and erosion after vessel discard can lead to subtle depth differences between the curve and the non-curve portions of the surface. Third, erosion and weathering make the object surface highly rough, which is equivalent to adding random noise to the depth map of the initial object surface. With these three complexities, it is difficult to use a low-level image segmentation algorithm to accurately segment these depth images for curve structures, as shown by an example in Figure 2.
In this paper, we propose a new supervised learning approach to segment such curve structures weakly stamped on the object surface. The basic idea is that, in most applications, such as exploring cultural heritage objects in archeology, the underlying designs of the curve structures bear certain geometries and patterns. For example, most of the curve structures consist of smooth curve segments. Furthermore, many curves in the structures show good parallelism with each other. These characteristics give the material a visually distinctive style [Smith and Knight2012]. Considering this high-level geometry and pattern information may help improve the accuracy and reliability of curve-structure segmentation. While it is difficult to handcraft features for all relevant curve geometries and patterns in an application, we expect the proposed approach to automatically learn these features from a set of training data with labeled ground truth.
In practice, the curve structures of interest have width, which may vary along the curve and need to be inferred in segmentation. However, it is well known that the curve geometry and pattern are independent of the curve width. Mixing all of them may substantially increase the difficulty of feature learning for segmentation. In this paper, we handle them separately by developing a three-step curve-structure segmentation algorithm. In the first step, a Fully Convolutional Network (FCN) is employed to extract the skeleton of curve structures, and estimate a scale value at each skeleton pixel. This scale value reflects the curve width at the corresponding skeleton pixel. In the second step, we propose a dense prediction network to refine the curve skeletons. In the third step, we develop an adaptive thresholding algorithm to achieve the final segmentation of curve structures with width by considering the estimated scale values.
For the experiments, we collected the depth images of a set of pottery sherds excavated from archaeological sites associated with the Swift Creek paddle-stamped tradition of southeastern North America. Ground-truth curve-structure segmentations were manually constructed. We evaluate the proposed method on the collected depth images and compare its performance against several existing algorithms. We also evaluate the segmentation results in the task of design matching in archeology.
General-purpose image segmentation has been studied for many decades, resulting in many image segmentation algorithms. For example, by considering only low-level pixel intensities, many edge detection [Wang, Kubota, and Siskind2004, Arbelaez et al.2011], region growing/splitting [Tremeau and Borel1997], pixel clustering [Li and Chen2015], and graph-based algorithms [Shi and Malik2000, Wang and Siskind2001, Wang and Siskind2003] have been developed for segmenting an image into multiple regions. By further considering mid-level cues like boundary smoothness, many active-contour and level-set algorithms have been developed to segment foreground objects from background [Chan and Vese2001, Vese and Chan2002]. In principle, these general-purpose image segmentation algorithms can be easily adapted to handle our problem of segmenting curve structures from depth images, by treating depth value as intensity. However, their segmentation performances are usually poor when the depth image is noisy and the desired curve structures are weak. In the experiments, we include several general-purpose segmentation algorithms, such as DoG, LevelSet [Vese and Chan2002], and GrabCut [Rother, Kolmogorov, and Blake2004], as comparison methods.
Deep-learning based algorithms, particularly CNN-based ones, have recently been used for image segmentation by learning high-level features of the desired segments in a supervised way [Badrinarayanan, Kendall, and Cipolla2015, Zheng et al.2015]. The most influential one is the Fully Convolutional Network (FCN) [Long, Shelhamer, and Darrell2015], which transforms traditional fully connected layers into convolutional layers, enabling training and prediction on a whole image at a time. To improve the localization of object boundaries, [Chen et al.2016] proposed a framework that combines a Conditional Random Field (CRF) with FCNs. However, if we directly apply these deep-learning based segmentation algorithms to our problem of segmenting curve structures, they may produce non-curve segments because the CNNs are trained directly on the color/intensity images. In this paper, we train CNNs on curve-skeleton images to better learn the curve-geometry and curve-pattern features. More related to our work is Deep Skeleton [Shen et al.2016], which also uses CNNs for skeleton extraction. However, Deep Skeleton is not specifically developed for curve structures and may produce many false positive skeletons. In the experiments, we include Deep Skeleton as a comparison method.
Curve-structure segmentation from RGB or gray-scale images has been studied in many specific applications. For example, [Lorigo et al.2001] utilized an energy criterion based on intensity and local boundary smoothness to extract blood vessels in medical images. [Tao, Prince, and Davatzikos2002] constructed a statistical shape model to extract sulcal curves on the outer cortex of the human brain. [Zou et al.2012] proposed a tree-based algorithm to detect curve-like cracks in pavement images. However, these methods all rely on assumptions specific to their respective applications, and it is not easy to extend a segmentation algorithm developed for one application to another.
Using computer vision and machine learning techniques to explore cultural heritage objects has attracted increasing interest in recent years. However, most of this work focuses on the classification and matching of object fragments. For example, in [Smith et al.2010, Makridis and Daras2012, Rasheed and Nordin2015], various archaeological fragments are classified based on color and texture features. In [Zhou et al.2017], an extended Chamfer matching algorithm is developed to identify the design of a pottery sherd by matching the curve structures on the sherd to all the known designs, where the curve structures on the sherds are segmented with manual assistance. In this paper, we focus on accurate segmentation of curve structures on the surface of sherds, which is a fundamental step before classification and matching.
The proposed method consists of three steps. First, we train an FCN to detect the skeletons of the curve structures in the depth image. This FCN network also estimates a scale value at each detected skeleton pixel to reflect the curve width at this skeleton pixel. Second, we train a dense prediction convolutional network to identify and prune false positive skeleton pixels. Finally, we develop a scale-adaptive thresholding algorithm to recover the curve width and achieve the final segmentation of curve structures.
Step I: Detecting Curve Skeletons using FCN
In this paper, skeletons are the center lines of the curve structures and are of one-pixel width. By ignoring the curve width, the skeletons reflect the geometry and pattern of the curve structures. Therefore, in the first step, we train an FCN to detect the skeletons of the curve structures from an input depth image. Just like image segmentation, skeleton detection can be formulated as a pixel-labeling problem: each skeleton pixel is assigned the label 1 and each non-skeleton pixel the label 0.
We design an FCN, as illustrated in Figure 3, to label skeleton pixels. It follows the encoder-decoder architecture developed in [Long, Shelhamer, and Darrell2015]. Encoders 1 and 2 are small convnets made up of two convolutional layers, two ReLU layers and one max-pooling layer. Encoder 3 is a small convnet made up of three convolutional layers, three ReLU layers and one max-pooling layer. After each encoder, the image size is reduced to 1/4, so the feature maps generated by the three encoders have successively larger receptive field sizes. After each encoder, a fully connected layer is employed to match the number of feature maps with the number of labels. In order to generate pixel-wise prediction results, the fully connected layers are implemented as convolutional layers. The resulting score maps are denoted as $S_1$, $S_2$ and $S_3$, respectively, as shown in Figure 3. Note that $S_1$, $S_2$ and $S_3$ are downsampled by factors of 2, 4, and 8, respectively, from the original image size. The decoders are three deconvolution layers [Xie and Tu2015].
The use of multiple encoders/decoders extracts image features at different levels of detail. To make full use of all the extracted features, the decoders are organized in a stepwise-accumulation fashion when fusing them together. The output skeleton heat map $M$ can be computed by
$$M = \mathcal{U}^{\times 2}\left(S_1 + \mathcal{U}^{\times 2}\left(S_2 + \mathcal{U}^{\times 2}(S_3)\right)\right),$$
where $\mathcal{U}$ indicates the upsampling operation performed by the decoders and its superscript is the upsampling factor, e.g., $\mathcal{U}^{\times 2}(S_3)$ indicates an upsampling of the map $S_3$ by a factor of 2. With the skeleton heat map $M$, we apply a common image thinning algorithm [Lam, Lee, and Suen1992] to generate the single-pixel-width skeleton map.
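The stepwise accumulation above can be sketched in a few lines of NumPy. This is an illustration only: nearest-neighbor upsampling stands in for the paper's bilinear deconvolution decoders, and all function names are our own.

```python
import numpy as np

def upsample2(m):
    """Upsampling by a factor of 2 (nearest-neighbor here; the paper's
    decoders use deconvolution/bilinear interpolation)."""
    return np.repeat(np.repeat(m, 2, axis=0), 2, axis=1)

def fuse_score_maps(s1, s2, s3):
    """Stepwise accumulation: M = U2(s1 + U2(s2 + U2(s3))).
    s1, s2, s3 are score maps downsampled by 2, 4, 8 respectively."""
    return upsample2(s1 + upsample2(s2 + upsample2(s3)))

# toy example for a 16x16 input image
s1 = np.ones((8, 8))   # 1/2 resolution
s2 = np.ones((4, 4))   # 1/4 resolution
s3 = np.ones((2, 2))   # 1/8 resolution
heat = fuse_score_maps(s1, s2, s3)
print(heat.shape)      # (16, 16)
```

Note how the sizes line up: each upsampling by 2 brings the coarser map to the resolution of the next finer score map before the two are added.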
Inspired by [Shen et al.2016], we can compare the three score maps $S_1$, $S_2$ and $S_3$ to estimate the scale at each detected skeleton pixel. The scale value at a skeleton pixel reflects the local curve width at this pixel. More specifically, since different encoders correspond to different receptive field sizes, at each pixel the receptive field size of the encoder with the largest score reflects the scale at this pixel. Before comparing the scores of the different maps, we first upsample them all to the original image size. This way, the scale at a skeleton pixel $p$ can be computed by
$$\mathrm{scale}(p) = r_{i^*}, \qquad i^* = \arg\max_{i\in\{1,2,3\}} \tilde{S}_i(p),$$
where $\tilde{S}_i$ is the upsampled score map of $S_i$ and $r_i$ is the receptive field size of encoder $i$. Later we will use the estimated scale values to help recover the curve width.
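A minimal sketch of this per-pixel argmax over upsampled score maps, with illustrative names and toy receptive-field sizes:

```python
import numpy as np

def estimate_scales(upsampled_scores, receptive_fields, skeleton_mask):
    """upsampled_scores: list of 2-D score maps, all at full image size,
    one per encoder. receptive_fields: receptive-field size of each
    encoder. At each skeleton pixel, the scale is the receptive field of
    the encoder with the largest score; non-skeleton pixels get 0."""
    stack = np.stack(upsampled_scores, axis=0)      # (n_encoders, H, W)
    best = np.argmax(stack, axis=0)                 # winning encoder index
    scales = np.asarray(receptive_fields)[best]     # map index -> field size
    scales[~skeleton_mask] = 0                      # defined only on skeleton
    return scales

scores = [np.array([[0.9, 0.1], [0.2, 0.2]]),
          np.array([[0.1, 0.8], [0.1, 0.9]])]
mask = np.array([[True, True], [False, True]])
sc = estimate_scales(scores, [7, 15], mask)
```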
Step II: Refining Skeletons using Dense Prediction Convnet
While we expect the FCN trained in Step I to learn curve geometry and pattern features for detecting skeletons, we find that it still produces many false positive skeletons, as shown in Figure 4. In this step, we further train a supervised classifier to identify and prune such false positives by learning more curve features. Specifically, for each skeleton pixel $p$ detected in Step I, we take a neighboring window in the original depth image around $p$ as the input and train a dense prediction convnet to determine whether $p$ is a true skeleton pixel or a false positive.
On real images, detecting a skeleton with a small dislocation from its true position is acceptable and unavoidable, since even a manually labeled skeleton may not be perfectly aligned with the real center line of the curve structures. Therefore, our aim is not to directly train a hard classifier to distinguish skeleton pixels from non-skeleton pixels. Instead, we train a soft classifier that outputs a skeleton probability at each pixel. To achieve this, in training we transform a binary skeleton map into a skeleton probability map $P$ in which the value at each pixel decays with its Euclidean distance to the nearest pixel in $\mathcal{S}$, the set of skeleton pixels in the binary skeleton map. Using $P$ as the output of the network, the binary classification problem is converted to a regression problem. Accordingly, we use a sigmoid function instead of a softmax in the last layer of the proposed dense prediction convnet.
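A common way to build such a soft training target is a Euclidean distance transform followed by a decay function. The Gaussian decay and the value of `sigma` below are our assumptions for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def skeleton_probability(binary_skeleton, sigma=2.0):
    """Soft target: each pixel's probability decays with its Euclidean
    distance to the nearest skeleton pixel (Gaussian decay is an
    assumption). binary_skeleton: 2-D array, nonzero on the skeleton."""
    # distance_transform_edt measures distance to the nearest zero,
    # so invert the skeleton mask first
    d = distance_transform_edt(~binary_skeleton.astype(bool))
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

skel = np.zeros((5, 5))
skel[2, 2] = 1
prob = skeleton_probability(skel, sigma=2.0)
```

Skeleton pixels get probability 1, and pixels one step away get `exp(-1/8)` with this `sigma`, so a slightly dislocated prediction is penalized only mildly.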
In this paper, we propose to use a convnet consisting of three convolutional layers, three max-pooling layers and two fully connected layers; its specific configuration is summarized in Table 1. For a testing image, let the set of skeleton pixels detected in Step I be $\mathcal{S}_0$ and the skeleton probability map generated by the prediction convnet in this step be $P$. We prune the low-probability skeleton pixels in $\mathcal{S}_0$ to obtain the refined set of skeleton pixels
$$\mathcal{S}' = \{\, p \in \mathcal{S}_0 : P(p) \ge \tau \,\},$$
where $\tau$ is a preset probability threshold.
Sample results of skeleton map after this step of refinement can be found in Figure 4.
|Convolution||n:128, k:, s:1, p:1|
|Convolution||n:64, k:, s:1, p:1|
|Convolution||n:32, k:, s:1, p:1|
The configuration of the network for Step II, where n, k, s and p stand for the number of outputs, kernel size, stride and padding size, respectively.
Step III: Curve-Structure Segmentation by Recovering Curve Width
In this step, we recover the width of the curve structures from the skeleton map derived in Step II, with the help of the scale values derived in Step I. Note that the width of the curve structures is not constant and may vary along the skeleton. Denote the original depth image by $I$ and let $\mathcal{S}'$ be the set of refined skeleton pixels detected on $I$ after Step II. For each skeleton pixel $p \in \mathcal{S}'$, we have a scale value $\mathrm{scale}(p)$ derived in Step I. We construct the curve-structure segmentation, in the form of a binary map of the same size as $I$, using Algorithm 1.
From steps 3 and 5 of this algorithm, we can see that the curve width at each skeleton pixel is determined by both the scale value at this pixel and the depth values at and around this pixel. This algorithm does not require the detected skeleton to be exactly aligned with the center line of the curves: a small dislocation of the skeletons may not change the final segmentation if the dislocated skeletons are still located inside the underlying curves. Sample results after Step III are shown in Figure 4.
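Algorithm 1 itself is not reproduced in this text, so the following is only a rough sketch of what a scale-adaptive width recovery could look like: around each skeleton pixel we open a window of the estimated scale and keep pixels deeper than the local window mean. The local-mean threshold and all names are our assumptions, not the paper's exact algorithm.

```python
import numpy as np

def recover_width(depth, skeleton_pixels, scales):
    """Sketch of scale-adaptive width recovery (assumed thresholding rule).
    depth: 2-D depth image, larger depth = carved curve.
    skeleton_pixels: iterable of (row, col) skeleton coordinates.
    scales: dict mapping (row, col) -> estimated scale (window size)."""
    seg = np.zeros(depth.shape, dtype=bool)
    H, W = depth.shape
    for (r, c) in skeleton_pixels:
        half = max(1, int(scales[(r, c)]) // 2)
        r0, r1 = max(0, r - half), min(H, r + half + 1)
        c0, c1 = max(0, c - half), min(W, c + half + 1)
        win = depth[r0:r1, c0:c1]
        # adaptive threshold: keep pixels deeper than the local window mean
        seg[r0:r1, c0:c1] |= win > win.mean()
    return seg

depth = np.zeros((5, 5))
depth[2, 1:4] = 1.0                      # a short, deep curve segment
seg = recover_width(depth, [(2, 2)], {(2, 2): 3})
```

Because the threshold is computed locally, a skeleton pixel that is slightly off the curve center line still recovers the same deep pixels, matching the dislocation tolerance noted above.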
One important application of the segmented curve structures in archeology is the task of design matching. In the later experiments, we will use this task to evaluate the performance of curve-structure segmentation. As shown in Figure 5(c), a design is the full curve pattern of the paddle that was used for stamping the object surface. In the past decades, archaeologists have restored a small number of full designs by manually examining thousands of sherds [Broyles1968, Snow1975]. The goal of design matching is to identify whether the segmented curve structures originated from a known design. This is a classical partial matching problem and the key component is the definition of a matching score or distance.
In this paper, we use the classical Chamfer matching [Barrow et al.1977, Zhou et al.2017] for this purpose. As shown in Figure 5, we first thin both the segmented curve structures and the considered design into one-pixel-wide skeletons and denote them as $C$ and $\mathcal{D}$, respectively. We then transform $C$ to match the design and compute the Chamfer distance
$$d_{CM}(T(C), \mathcal{D}) = \frac{1}{N} \sum_{p \in T(C)} \min_{q \in \mathcal{D}} \| p - q \|_2,$$
where $T(C)$ is the curve pattern after the transform $T$, $p$ ranges over all the skeleton-pixel coordinates in the transformed partial pattern $T(C)$, $q$ ranges over all the skeleton-pixel coordinates in the curve pattern $\mathcal{D}$, and $N$ is the total number of skeleton pixels in the partial pattern $T(C)$. Eq. (5) finds the nearest skeleton-pixel coordinate in $\mathcal{D}$ for each skeleton-pixel coordinate in $T(C)$, records its Euclidean distance, and finally averages over all the skeleton-pixel coordinates in $T(C)$. The matching distance between $C$ and $\mathcal{D}$ is then defined by
$$d(C, \mathcal{D}) = \min_{T} d_{CM}(T(C), \mathcal{D}),$$
where $T$ covers all possible translations and rotations. Scaling transforms are not considered here because both $C$ and $\mathcal{D}$ have known actual sizes.
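The directed Chamfer distance and the minimization over transforms can be sketched with a k-d tree for nearest-neighbor lookup. This sketch approximates the translation search by centroid alignment and samples rotations coarsely; the paper's search over all translations and rotations is more exhaustive.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(curve_pts, design_pts):
    """Directed Chamfer distance: average Euclidean distance from each
    skeleton pixel of the (transformed) sherd pattern to its nearest
    design pixel. Both inputs are (N, 2) coordinate arrays."""
    tree = cKDTree(design_pts)
    d, _ = tree.query(curve_pts)
    return d.mean()

def matching_distance(curve_pts, design_pts, angles_deg=range(0, 360, 10)):
    """Minimum Chamfer distance over sampled rotations; translation is
    approximated here by centroid alignment (an assumption of this sketch)."""
    curve = curve_pts - curve_pts.mean(axis=0)
    design = design_pts - design_pts.mean(axis=0)
    best = np.inf
    for a in angles_deg:
        t = np.deg2rad(a)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        best = min(best, chamfer_distance(curve @ rot.T, design))
    return best

pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
```

For a sherd pattern that is an exact subset of a design, the directed distance is zero at the correct pose, which is why only the partial pattern's pixels are averaged over.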
In this section, we validate the effectiveness of the proposed method from three perspectives. First, we evaluate the proposed method in terms of the classical metrics of precision, recall and F-measure, and compare it against six other methods. Second, we conduct experiments to justify the usefulness of each step in our method. Third, we evaluate the curve-structure segmentation results in the task of design matching.
For this study, we collected the depth images of 1,174 pottery sherds excavated from various archaeological sites in southeastern North America. We used a linear-array 3D laser scanner, NextEngine, to obtain the point cloud of each sherd surface with a resolution of 100 points per . Their depth images are then sampled at the same resolution, i.e., each pixel in the depth image covers . The average size of the collected depth images is . We have 530 of these depth images with manually labeled ground-truth curve-structure segmentations; among them, we randomly pick 250 for training and the remaining 280 for testing.
To train the FCN in Step I, we thin all the ground-truth curve structures to one-pixel width skeletons, using a standard image thinning algorithm [Lam, Lee, and Suen1992]. Data augmentation is employed here to generate sufficient training data. Specifically, we first split the whole image into small blocks with a size of . Then these blocks are rotated, scaled and flipped with the same scheme as in [Shen et al.2016]. Finally, 141,696 blocks are used in FCN training in Step I. As for the network training in Step II, we randomly take 44,906 window images with a size of around the skeleton pixels identified in Step I for training.
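The rotate/flip part of the augmentation scheme can be sketched as the eight dihedral variants of a block; the exact rotation and scaling scheme follows [Shen et al.2016], and this helper name is illustrative only.

```python
import numpy as np

def augment_block(block):
    """Return the 8 dihedral variants of a square image block:
    4 rotations, each with and without a horizontal flip."""
    out = []
    for k in range(4):
        rotated = np.rot90(block, k)
        out.append(rotated)
        out.append(np.fliplr(rotated))
    return out

variants = augment_block(np.arange(9).reshape(3, 3))
```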
For better training, the parameters of the encoders in the skeleton extraction network are initialized with the pre-trained FCN-8s model [Long, Shelhamer, and Darrell2015], and the parameters of the decoders are fixed to perform bilinear interpolation [Xie and Tu2015]. The maximum number of training iterations is set to 20,000, with a mini-batch size of 10. The base learning rate is and decays to after 10,000 iterations. Momentum and weight decay are set to 0.9 and respectively.
Because the dense prediction convnet in Step II is relatively lightweight, we choose to train it from scratch. The maximum number of training iterations is set to 100,000, with a mini-batch size of 10. The base learning rate is , and it decays in an inverse way with the parameter and . Momentum and weight decay are set to be the same as the FCN in Step I.
F-measure based Segmentation Performance
To evaluate the effectiveness of our method for curve-structure segmentation, we select six widely used segmentation methods for comparison: Difference of Gaussian (DoG), Level Set [Vese and Chan2002], GrabCut [Rother, Kolmogorov, and Blake2004], Fully Convolutional Network (FCN) [Long, Shelhamer, and Darrell2015], Deep Skeleton [Shen et al.2016] and DeepLab [Chen et al.2016]. The experiment is conducted on the 280 testing images described above, and the evaluation criterion is the traditional F-measure, $F = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$.
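The pixel-wise precision, recall and F-measure between a predicted and a ground-truth binary segmentation map can be computed as follows (a standard definition; the function name is ours):

```python
import numpy as np

def f_measure(pred, gt):
    """Pixel-wise precision, recall and F-measure between binary maps."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # true positive pixels
    precision = tp / max(pred.sum(), 1)       # guard against empty prediction
    recall = tp / max(gt.sum(), 1)            # guard against empty ground truth
    if precision + recall == 0:
        return precision, recall, 0.0
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

p, r, f = f_measure(np.array([[1, 1], [0, 0]]), np.array([[1, 0], [1, 0]]))
```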
For most of these comparison methods, we keep the default settings in their source code, but several exceptions need to be clarified. Since there is no default setting for DoG, we determine its parameters by trial and error. The best-performing setting we found is: , , , where $k$ and $\sigma$ are the kernel size and standard deviation of the Gaussian filters. The filtered images are transformed into curve maps with a threshold of 1. GrabCut requires an initialization of the foreground object, for which we simply use the DoG result. For Deep Skeleton, we calculated the ground-truth scale maps by applying a distance transform to the ground-truth segmentation maps. The performance of all methods, averaged over all 280 testing images, is summarized in Table 2.
We can see that the proposed method achieves the best F-measure, outperforming the second best (Deep Skeleton) by 7.7%. Figure 6 shows the segmentation results on three sample images using all seven methods. In these images, we can observe that DoG mainly enhances the difference between adjacent pixels; as a purely low-level method, it may not capture deep and shallow curves simultaneously. GrabCut was initialized with the DoG result, but its performance is even worse. One major reason might be that the data and smoothness energies defined in GrabCut are not sufficiently discriminative to separate the curve structures from the non-curve object surface in such low-contrast images; this is probably also the reason that Level Set fails. As expected, the three CNN-based comparison methods, i.e., FCN, Deep Skeleton and DeepLab, generally achieve better performance than the low-level methods. However, their segmentation results usually contain many false positives and the boundaries of the segmented curve structures are quite rough. While the proposed method achieves first place in neither precision nor recall, it achieves the best final F-measure.
Usefulness of Each Step
Intuitively, the three steps of our method could be replaced by other alternatives or simply omitted. To justify the usefulness of each step, we design three additional experiments; in each, we modify or remove one step of the proposed method and then check its influence on the final segmentation performance.
Modifying Step I: Step I of the proposed method is skeleton extraction. The FCN we use in this step could instead be trained to produce curve-structure segmentation directly; however, we choose to extract skeletons first and then take additional steps to recover the curve width. In this experiment, we adjust the FCN in Step I to output curve structures with width directly: we use the ground-truth segmentation as the training output and remove the extra upsampling layers in the FCN. All other implementation parameters remain unchanged. Sample results of this modified method are shown in Figure 7(b). We can see that these results contain more false positives and rougher segmentation boundaries. Quantitatively, the F-measure of the proposed method decreases from 0.731 to 0.665 with this modification to Step I.
Removing Step II: Step II of the proposed method employs a dense prediction convnet as a pixel-wise classifier to refine skeletons extracted by FCN in Step I. To justify its usefulness, we remove this step and recover curve width directly from the skeletons generated in Step I. Sample results are shown in Figure 7(c). We can see that the removal of Step II leads to more false positives. Quantitatively, F-measure of the proposed method decreases from 0.731 to 0.662 if we remove Step II.
Modifying Step III: Simple morphological dilation seems an intuitive alternative for recovering curve width in Step III. In this experiment, we replace Step III with a dilation operation with a radius of 15 pixels, the best-performing radius among all values we tested. Sample results are shown in Figure 7(d). While the dilation produces very smooth curve structures, they do not align well with the ground truth. Quantitatively, the F-measure of the proposed method decreases by 3.5% with this modification to Step III.
In this experiment, we evaluate curve segmentation results in the task of design matching. We take the depth images of 292 sherds with known full designs and in total they come from 29 different designs. The matching distance is the minimal Chamfer distance as defined above.
We use the Cumulative Matching Characteristics (CMC) ranking metric to evaluate the design-matching performance. For each sherd curve pattern, we match it against all 29 designs by Chamfer matching. We then sort these 29 designs by matching distance and pick the top-$n$ matching designs with the smallest matching distances. If the ground-truth design of the sherd is among the identified top-$n$ designs, we count it as a correct matching under rank $n$. We repeat this for all sherds and calculate the accuracy, i.e., the percentage of correctly matched sherds, under each rank $n$, $1 \le n \le 29$. This way, we can draw a CMC curve in terms of rank $n$, as shown in Figure 8, which reflects the performance of curve-structure segmentation: the higher the CMC curve, the better the segmentation performance.
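The CMC computation described above can be sketched in pure Python (names illustrative): a sherd counts as a hit at every rank at or above the position of its true design in the sorted distance list.

```python
def cmc_curve(match_distances, true_ids, n_designs):
    """match_distances: per-sherd list of distances to each design.
    true_ids: index of the ground-truth design for each sherd.
    Returns the accuracy at each rank n = 1 .. n_designs."""
    hits = [0] * n_designs
    for dists, true_id in zip(match_distances, true_ids):
        # rank of the true design = how many designs match at least as well
        order = sorted(range(len(dists)), key=lambda i: dists[i])
        rank = order.index(true_id)
        for n in range(rank, n_designs):
            hits[n] += 1
    total = len(true_ids)
    return [h / total for h in hits]

# two sherds, three designs; the second sherd's true design ranks 2nd
cmc = cmc_curve([[0.1, 0.5, 0.9], [0.5, 0.1, 0.9]], [0, 0], 3)
```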
Besides the proposed method, we select three other representative comparison segmentation methods for performance evaluation in this experiment. These three comparison methods are DoG, FCN and Deep Skeleton. Figure 8 shows the CMC curves of the proposed method and these three comparison methods in the task of design matching. The proposed method achieves a CMC rank-1 rate of 20% and a CMC rank-15 rate of 78%, which are much better than the other three comparison methods.
In this paper, we put forward a novel and challenging image segmentation problem: weak curve-structure segmentation from noisy depth images, which has important applications in archeology for exploring large collections of fragmented cultural heritage objects. We developed a new three-step supervised-learning based method to address this problem, by first extracting and refining the skeletons of underlying curve structures and then producing the final segmentation by recovering the curve width at each skeleton pixel. In the experiment, we tested the proposed method on a dataset of depth images scanned from unearthed pottery sherds from southeastern North America. We found that the proposed method performs better than several widely used low-level and deep-learning based image segmentation methods in terms of F-measure.
Acknowledgment This work was partly supported by NSF-1658987 and NSFC-61672376.
- [Arbelaez et al.2011] Arbelaez, P.; Maire, M.; Fowlkes, C.; and Malik, J. 2011. Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(5):898–916.
- [Badrinarayanan, Kendall, and Cipolla2015] Badrinarayanan, V.; Kendall, A.; and Cipolla, R. 2015. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561.
- [Barrow et al.1977] Barrow, H. G.; Tenenbaum, J. M.; Bolles, R. C.; and Wolf, H. C. 1977. Parametric correspondence and chamfer matching: Two new techniques for image matching. Technical report, SRI International, Menlo Park, CA, Artificial Intelligence Center.
- [Broyles1968] Broyles, B. J. 1968. Reconstructed designs from swift creek complicated stamped sherds. Southeastern Archaeological Conference Bulletin.
- [Chan and Vese2001] Chan, T. F., and Vese, L. A. 2001. Active contours without edges. IEEE Transactions on Image Processing 10(2):266–277.
- [Chen et al.2016] Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2016. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv preprint arXiv:1606.00915.
- [Halir1999] Halir, R. 1999. An automatic estimation of the axis of rotation of fragments of archaeological pottery: A multi-step model-based approach. In Proceedings of the International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media.
- [Kampel and Sablatnig2007] Kampel, M., and Sablatnig, R. 2007. Rule based system for archaeological pottery classification. Pattern Recognition Letters 28(6):740–747.
- [Lam, Lee, and Suen1992] Lam, L.; Lee, S.-W.; and Suen, C. Y. 1992. Thinning methodologies-a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 14(9):869–885.
- [Li and Chen2015] Li, Z., and Chen, J. 2015. Superpixel segmentation using linear spectral clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1356–1363.
- [Long, Shelhamer, and Darrell2015] Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.
- [Lorigo et al.2001] Lorigo, L. M.; Faugeras, O. D.; Grimson, W. E. L.; Keriven, R.; Kikinis, R.; Nabavi, A.; and Westin, C.-F. 2001. Curves: Curve evolution for vessel segmentation. Medical Image Analysis 5(3):195–206.
- [Makridis and Daras2012] Makridis, M., and Daras, P. 2012. Automatic classification of archaeological pottery sherds. Journal on Computing and Cultural Heritage 5(4):15.
- [Rasheed and Nordin2015] Rasheed, N. A., and Nordin, M. J. 2015. Archaeological fragments classification based on rgb color and texture features. Journal of Theoretical & Applied Information Technology 76(3).
- [Rother, Kolmogorov, and Blake2004] Rother, C.; Kolmogorov, V.; and Blake, A. 2004. Grabcut: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics 23(3):309–314.
- [Shen et al.2016] Shen, W.; Zhao, K.; Jiang, Y.; Wang, Y.; Zhang, Z.; and Bai, X. 2016. Object skeleton extraction in natural images by fusing scale-associated deep side outputs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 222–230.
- [Shi and Malik2000] Shi, J., and Malik, J. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8):888–905.
- [Smith and Knight2012] Smith, K. Y., and Knight, V. J. 2012. Style in swift creek paddle art. Southeastern Archaeology 31(2):143–156.
- [Smith et al.2010] Smith, P.; Bespalov, D.; Shokoufandeh, A.; and Jeppson, P. 2010. Classification of archaeological ceramic fragments using texture and color descriptors. In Proceedings of the Computer Vision and Pattern Recognition Workshops, 49–54.
- [Snow1975] Snow, F. H. 1975. Swift creek designs and distributions: A south georgia study. Early Georgia 3(2):38–59.
- [Tao, Prince, and Davatzikos2002] Tao, X.; Prince, J. L.; and Davatzikos, C. 2002. Using a statistical shape model to extract sulcal curves on the outer cortex of the human brain. IEEE Transactions on Medical Imaging 21(5):513–524.
- [Tremeau and Borel1997] Tremeau, A., and Borel, N. 1997. A region growing and merging algorithm to color segmentation. Pattern Recognition 30(7):1191–1203.
- [Vese and Chan2002] Vese, L. A., and Chan, T. F. 2002. A multiphase level set framework for image segmentation using the mumford and shah model. International Journal of Computer Vision 50(3):271–293.
- [Wang and Siskind2001] Wang, S., and Siskind, J. M. 2001. Image segmentation with minimum mean cut. In Proceedings of the IEEE International Conference on Computer Vision, 517–524.
- [Wang and Siskind2003] Wang, S., and Siskind, J. M. 2003. Image segmentation with ratio cut. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(6):675–690.
- [Wang, Kubota, and Siskind2004] Wang, S.; Kubota, T.; and Siskind, J. M. 2004. Salient boundary detection using ratio contour. In Advances in Neural Information Processing Systems, 1571–1578.
- [Xie and Tu2015] Xie, S., and Tu, Z. 2015. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, 1395–1403.
- [Zheng et al.2015] Zheng, S.; Jayasumana, S.; Romera-Paredes, B.; Vineet, V.; Su, Z.; Du, D.; Huang, C.; and Torr, P. H. 2015. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, 1529–1537.
- [Zhou et al.2017] Zhou, J.; Yu, H.; Smith, K.; Wilder, C.; Yu, H.; and Wang, S. 2017. Identifying designs from incomplete, fragmented cultural heritage objects by curve-pattern matching. Journal of Electronic Imaging 26(1):011022.
- [Zou et al.2012] Zou, Q.; Cao, Y.; Li, Q.; Mao, Q.; and Wang, S. 2012. Cracktree: Automatic crack detection from pavement images. Pattern Recognition Letters 33(3):227–238.