The goal of object segmentation is to produce a pixel-level segmentation of different object categories. It is challenging as the objects may appear against various backgrounds and under different visual conditions. CRFs model the conditional distribution of labels given observations, and represent the state of the art in image/object segmentation [34, 32, 10, 24, 27, 15]. The max-margin principle has also been applied to predict structured outputs, including SSVMs and max-margin Markov networks. These three methods share similarities when viewed as optimization problems with different loss functions. Szummer et al. proposed to learn linear coefficients of CRFs potentials using SSVMs and graph cuts. To date, most of these methods assume a pre-defined parametric model for the potential functions, and typically only the linear coefficients of the parametric model are learned. This greatly limits the modelling capability of CRFs, and thus calls for effective methods to incorporate nonlinear nonparametric models for learning the potential functions in CRFs.
As in standard support vector machines (SVMs), nonlinearity can be achieved by introducing nonlinear kernels into SSVMs. However, the training time complexity of nonlinear SVMs is roughly between $O(n^2)$ and $O(n^3)$, with $n$ being the number of training examples. This time complexity is problematic for SSVMs, where the number of constraints grows exponentially with the description length of the label. Moreover, nonlinear kernel functions can significantly slow down prediction at test time in most cases. For these reasons, most current SSVMs applications use linear kernels (or linear parametric potential functions in CRFs), despite the fact that nonlinear functions usually deliver better prediction accuracy. In this work, we address this issue by combining CRFs with nonparametric decision trees. Both CRFs and decision trees have enjoyed tremendous success in computer vision. Decision trees are capable of modelling complex relations and generalizing well to test data. Unlike kernel methods, decision trees are fast to evaluate and can be used to select informative features.
In this work, we propose to use ensembles of decision trees to map the image content to both the unary terms and the pairwise interaction values in CRFs. The proposed method is termed CRFTree. Specifically, we formulate both the unary and pairwise potentials as nonparametric forests, i.e., ensembles of decision trees, and learn the ensemble parameters and the trees in a single optimization framework. In this way, nonlinearity is easily introduced into CRFs learning without confronting the kernel dilemma. Furthermore, we learn class-wise decision trees for each object. Due to the rich structure and flexibility of decision trees, our approach is powerful in modelling complex data likelihoods and label relationships. The resulting optimization problem is very challenging in the sense that it can involve exponentially or even infinitely many variables and constraints. We summarize our main contributions as follows.
We formulate the unary and pairwise potentials as ensembles of decision trees, and show how to jointly learn the ensemble parameters and the trees as a unified optimization problem within the large-margin framework. In this fashion, we achieve nonlinear potential learning on both the unary and pairwise terms.
We learn class-wise decision trees (potentials) for each object that appears in the image.
We show how to train the proposed CRFTree model efficiently. In particular, we combine the column generation and cutting-planes techniques to approximately solve the resulting optimization problem, which can involve exponentially many variables and constraints.
We empirically demonstrate that CRFTree outperforms existing methods for image segmentation. On both binary and multi-class segmentation datasets we show the advantages of the learned nonlinear nonparametric potentials of decision trees.
Related work We briefly review recent works that are relevant to ours. A few attempts have been made to apply nonlinear kernels in SSVMs. Yu et al. and Severyn et al. developed sampled-cut based methods for training SSVMs with kernels. Sampled-cut methods were originally proposed for standard kernel SVMs; when applied to SSVMs, the performance is compromised . In , image-mask pair kernels are designed to exploit image-level structural information for object segmentation. However, these kernels are restricted to the unary term. Although not in the large-margin framework, the kernel CRFs proposed in  incorporate kernels into CRFs learning; the authors only demonstrated the efficacy of their method on a synthetic dataset and a small-scale protein dataset. To sum up, these approaches are hampered by heavy computational complexity. Furthermore, it is not a trivial task to design appropriate kernels for structured problems. Recently, Lucchi et al.
proposed a two-step solution to tackle this problem. Specifically, they train linear SSVMs using kernelized feature vectors that are obtained from training a standard nonlinear kernel SVMs model. They experimentally demonstrate that the kernel-transferred linear SVMs model achieves performance similar to that of Gaussian SVMs. However, this approach is heuristic, and it cannot be shown theoretically that their formulation approximates a nonlinear SSVMs model. Besides, their method requires extra memory and training time, since the dimension of the transformed features equals the number of support vectors, which is linearly proportional to the size of the training data. Moreover, compared to the above-mentioned works of  and , we achieve nonlinear learning on both the unary and the pairwise terms, while theirs is limited to nonlinear unary potential learning. The recent work of Shen et al. generalizes standard boosting methods to structured learning, which shares similarities with our work here. However, our method bears critical differences from theirs: 1) We design a column generation method for nonlinear tree potential learning in CRFs directly from the SSVMs formulation. Different from the case in , which can directly derive a column generation method analogous to LPBoost , our derivation here is more challenging. This is because we cannot obtain the most violated constraint from the constraints of the dual problem, on which the column generation technique relies. We instead inspect the KKT conditions to seek the most violated constraint. This is an important difference compared to existing column generation techniques. 2) We develop a CRFs learning method for multi-class semantic segmentation, while  only shows CRFs learning for binary foreground/background segmentation. Our experiments on the MSRC-21 dataset show that our method achieves state-of-the-art results.
3) We learn class-wise decision trees (potentials) for each object that appears in the image. This is different from . The work of decision tree fields  is close to ours in that they also use decision trees to model the pairwise potentials. The major difference is that in , potential functions are constructed by directly summing the energy tables associated with the set of nodes visited during the evaluation of the decision trees. Their trees are generally deep, with a depth of 15 for the unary potential and 6 for the pairwise potential in their experiments. By contrast, we model the potential functions as ensembles of decision trees and learn them in the large-margin framework. In our method, the decision trees are shallow and simple, with binary outputs.
II Learning tree potentials in CRFs
We present the details of our method in this section by first introducing the CRFs models for segmentation, then formulating the energy functions and showing how to learn decision tree potentials in the large-margin framework.
II-A Segmentation using CRFs models
Before presenting our method, we first revisit how to use CRFs models to perform image segmentation. Given an image instance $\mathbf{x}$ and its corresponding labelling $\mathbf{y}$, CRFs  model the conditional distribution of the form
$$P(\mathbf{y} \,|\, \mathbf{x}; \mathbf{w}) = \frac{1}{Z(\mathbf{x}; \mathbf{w})} \exp\bigl(-E(\mathbf{y}, \mathbf{x}; \mathbf{w})\bigr), \qquad (1)$$
where $\mathbf{w}$ are parameters and $Z$ is the normalization term. The energy $E$ of an image $\mathbf{x}$ with segmentation labels $\mathbf{y}$ over the nodes (superpixels) $\mathcal{N}$ and edges $\mathcal{S}$ takes the following form:
$$E(\mathbf{y}, \mathbf{x}; \mathbf{w}) = \sum_{p \in \mathcal{N}} U(y_p, \mathbf{x}; \mathbf{w}) + \sum_{(p,q) \in \mathcal{S}} V(y_p, y_q, \mathbf{x}; \mathbf{w}). \qquad (2)$$
Here $U$ and $V$ are the unary and pairwise potentials, both of which depend on the observations $\mathbf{x}$ as well as the parameter $\mathbf{w}$. CRFs seek an optimal labelling that achieves the maximum a posteriori (MAP), which mainly involves a two-step process : 1) learning the model parameters from the training data; 2) inferring a most likely labelling for the test data given the learned parameters. The segmentation problem thus reduces to minimizing the energy (or cost) over $\mathbf{y}$ with the learned parameters $\mathbf{w}$, which is $\mathbf{y}^{\ast} = \operatorname{argmin}_{\mathbf{y}} E(\mathbf{y}, \mathbf{x}; \mathbf{w})$. When the energy function is submodular, this inference problem can be efficiently solved via graph cuts .
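As a toy illustration of the MAP inference step just described, the sketch below minimizes a small binary pairwise energy by exhaustive search; all names are illustrative, and the brute-force search merely stands in for graph cuts, which solve the same argmin efficiently when the energy is submodular.

```python
import itertools
import numpy as np

def crf_energy(y, unary, edges, pairwise):
    """Energy of labelling y: sum of unary terms plus a Potts-style pairwise
    penalty on each edge whose endpoints take different labels."""
    e = sum(unary[p, y[p]] for p in range(len(y)))
    e += sum(pairwise * (y[p] != y[q]) for p, q in edges)
    return e

def map_inference_bruteforce(unary, edges, pairwise, num_labels=2):
    """Exhaustive MAP inference: argmin over all labellings (toy-sized only)."""
    n = unary.shape[0]
    best_y, best_e = None, np.inf
    for y in itertools.product(range(num_labels), repeat=n):
        e = crf_energy(y, unary, edges, pairwise)
        if e < best_e:
            best_y, best_e = list(y), e
    return best_y, best_e

# 3-node chain: unary evidence favours [0, ?, 1]; smoothness decides node 1.
unary = np.array([[0.0, 2.0], [1.0, 0.9], [2.0, 0.0]])
edges = [(0, 1), (1, 2)]
y_star, e_star = map_inference_bruteforce(unary, edges, pairwise=0.5)
```

Here node 1's weak unary preference for label 1 combines with the smoothness term, so the minimizer attaches it to node 2.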
II-B Energy Formulation
Given the energy function in Eqn. (2), we show how to construct the unary and pairwise potentials using decision trees. We denote by $\mathbf{x}_p$ the features of superpixel $p$, with its label $y_p \in \{1, \dots, K\}$, where $K$ is the number of classes. Let $\mathcal{H}$ be a set of decision trees, which can be infinite. Each unary decision tree takes $\mathbf{x}_p$ as input and outputs a binary value, and each pairwise decision tree takes a pair $(\mathbf{x}_p, \mathbf{x}_q)$ as input to output a binary value. We introduce $K+1$ groups of decision trees, in which $K$ groups are for the unary potential and one group is for the pairwise potential. For the unary potential, the $K$ groups of decision trees correspond to the $K$ categories; the $c$-th group is associated with the $c$-th class. In other words, for each class we maintain its own unary feature mappings. Each group of decision trees for the unary potential can be written as a vector $\Phi_c(\mathbf{x}_p)$ collecting the binary outputs of its decision trees. Accordingly, the group of decision trees for the pairwise potential produces the output vector $\Phi_{\mathrm{pair}}(\mathbf{x}_p, \mathbf{x}_q)$. The whole set of decision trees used is denoted by $\mathcal{T}$. We then construct the unary and pairwise potentials as
$$U(y_p, \mathbf{x}; \mathbf{w}) = \sum_{c=1}^{K} \delta(y_p = c)\, \mathbf{w}_c^{\top} \Phi_c(\mathbf{x}_p), \qquad (3)$$
$$V(y_p, y_q, \mathbf{x}; \mathbf{w}) = \delta(y_p \neq y_q)\, \mathbf{v}^{\top} \Phi_{\mathrm{pair}}(\mathbf{x}_p, \mathbf{x}_q), \qquad (4)$$
where $\delta(\cdot)$ is an indicator function which equals 1 if the input is true and 0 otherwise. Then the energy function in Eqn. (2) can be written as:
$$E(\mathbf{y}, \mathbf{x}; \mathbf{w}) = \sum_{p \in \mathcal{N}} \sum_{c=1}^{K} \delta(y_p = c)\, \mathbf{w}_c^{\top} \Phi_c(\mathbf{x}_p) + \sum_{(p,q) \in \mathcal{S}} \delta(y_p \neq y_q)\, \mathbf{v}^{\top} \Phi_{\mathrm{pair}}(\mathbf{x}_p, \mathbf{x}_q). \qquad (5)$$
Next we show how to learn these decision tree potentials in the large-margin framework.
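To make the tree-ensemble potentials concrete, here is a minimal sketch (with illustrative names and stub tree outputs, not the paper's implementation) of evaluating such an energy: unary trees contribute only for the class a superpixel actually takes, and pairwise trees contribute only when neighbouring labels disagree.

```python
import numpy as np

def tree_energy(y, phi_unary, w, phi_pair, v, edges):
    """Energy with tree-ensemble potentials (a sketch; names are illustrative).

    phi_unary[c][p] : binary outputs of the class-c unary trees on superpixel p
    w[c]            : non-negative weights of the class-c unary trees
    phi_pair[(p,q)] : binary outputs of the pairwise trees on edge (p, q)
    v               : non-negative weights of the pairwise trees
    """
    e = 0.0
    for p, yp in enumerate(y):
        # unary: only the trees of class y_p fire (the indicator delta(y_p = c))
        e += w[yp] @ phi_unary[yp][p]
    for p, q in edges:
        # pairwise: contributes only when neighbouring labels disagree
        if y[p] != y[q]:
            e += v @ phi_pair[(p, q)]
    return e

# toy example: 2 classes, 3 superpixels, 2 unary trees per class, 1 pairwise tree
phi_unary = {0: np.array([[1, 0], [0, 1], [0, 0]]),
             1: np.array([[0, 1], [1, 0], [1, 1]])}
w = {0: np.array([0.5, 0.5]), 1: np.array([0.5, 0.5])}
phi_pair = {(0, 1): np.array([1]), (1, 2): np.array([0])}
v = np.array([1.0])
E = tree_energy([0, 1, 1], phi_unary, w, phi_pair, v, edges=[(0, 1), (1, 2)])
```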
Ii-C Learning CRFs in the large-margin framework
Instead of directly minimizing the negative log-likelihood loss, we here learn the CRFs parameters in the large-margin framework, similar to . Given a set of training examples $\{(\mathbf{x}^i, \mathbf{y}^i)\}_{i=1}^{m}$, the large-margin based CRFs learning solves the following optimization:
$$\min_{\mathbf{w}, \boldsymbol{\xi}} \; \frac{1}{2}\|\mathbf{w}\|^2 + \frac{C}{m} \sum_{i=1}^{m} \xi_i, \quad \text{s.t.} \;\; E(\mathbf{y}, \mathbf{x}^i; \mathbf{w}) - E(\mathbf{y}^i, \mathbf{x}^i; \mathbf{w}) \geq \Delta(\mathbf{y}^i, \mathbf{y}) - \xi_i, \;\; \forall i, \forall \mathbf{y}, \qquad (6)$$
where $\Delta(\mathbf{y}^i, \mathbf{y})$ is a loss function associated with the prediction $\mathbf{y}$ and the true label mask $\mathbf{y}^i$. In general, we have $\Delta(\mathbf{y}, \mathbf{y}) = 0$ and $\Delta(\mathbf{y}^i, \mathbf{y}) > 0$ for any $\mathbf{y} \neq \mathbf{y}^i$. Intuitively, the optimization in Eqn. (6) is to encourage the energy of the ground-truth labelling to be lower than that of any other incorrect labelling by at least a margin $\Delta(\mathbf{y}^i, \mathbf{y})$.
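The margin requirement just described can be read, per competing labelling, as a slack computation; a minimal sketch with scalar energies (illustrative, not the training procedure itself):

```python
def margin_violation(E_gt, E_y, delta):
    """Slack needed for one constraint of the form
    E(ground truth) <= E(y) - Delta + xi, i.e. xi = max(0, Delta - (E_y - E_gt))."""
    return max(0.0, delta - (E_y - E_gt))

# ground-truth energy 1.0, competing labelling energy 1.3, required margin 0.5:
# the gap is only 0.3, so a slack of 0.2 is needed
xi = margin_violation(1.0, 1.3, 0.5)
```

When the energy gap already exceeds the required margin, the slack is zero and the constraint is inactive.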
To learn the potential functions we proposed in §II-B in the large-margin framework, we introduce the following definitions. For the unary part, we define the unary feature mapping by stacking the class-wise tree responses:
$$\Psi^{(1)}(\mathbf{x}, \mathbf{y}) = \Bigl[ \sum_{p} \delta(y_p = 1)\, \Phi_1(\mathbf{x}_p);\; \dots;\; \sum_{p} \delta(y_p = K)\, \Phi_K(\mathbf{x}_p) \Bigr],$$
where $[\cdot\,; \cdot]$ stacks vectors. Recall that $\mathbf{x}_p$ denotes the $p$-th superpixel of the image $\mathbf{x}$. Here, $\Psi^{(1)}$ acts as the unary feature mapping. Clearly we have:
$$\sum_{p} U(y_p, \mathbf{x}; \mathbf{w}) = [\mathbf{w}_1; \dots; \mathbf{w}_K]^{\top} \Psi^{(1)}(\mathbf{x}, \mathbf{y}).$$
For the pairwise part, we define the pairwise feature mapping as:
$$\Psi^{(2)}(\mathbf{x}, \mathbf{y}) = \sum_{(p,q)} \delta(y_p \neq y_q)\, \Phi_{\mathrm{pair}}(\mathbf{x}_p, \mathbf{x}_q).$$
Then we have the following relation:
$$\sum_{(p,q)} V(y_p, y_q, \mathbf{x}; \mathbf{w}) = \mathbf{v}^{\top} \Psi^{(2)}(\mathbf{x}, \mathbf{y}).$$
We further define $\mathbf{w} = [\mathbf{w}_1; \dots; \mathbf{w}_K; \mathbf{v}]$, and the joint feature mapping as
$$\Psi(\mathbf{x}, \mathbf{y}) = [\Psi^{(1)}(\mathbf{x}, \mathbf{y});\; \Psi^{(2)}(\mathbf{x}, \mathbf{y})].$$
With the definitions of $\mathbf{w}$ and $\Psi$, the energy function can then be written as:
$$E(\mathbf{y}, \mathbf{x}; \mathbf{w}) = \mathbf{w}^{\top} \Psi(\mathbf{x}, \mathbf{y}).$$
Now we can apply the large-margin framework to learn CRFs using the proposed energy functions by rewriting the optimization problem in Eqn. (6) as:
Note that we add the constraint $\mathbf{w} \geq 0$ to ensure the submodularity of our energy functions, the details of which we discuss later in §II-E. We are now ready to learn $\mathbf{w}$ and the decision trees in a single optimization problem formulated in Eqn. (II-C), but it is not yet clear how to solve it. Next we demonstrate how to solve the optimization problem in Eqn. (II-C) by using column generation and cutting-plane techniques.
II-D Learning tree potentials using column generation
We aim to learn a set of decision trees and the potential parameter $\mathbf{w}$ by solving the optimization problem in Eqn. (II-C). However, jointly learning the trees and $\mathbf{w}$ is generally difficult. Here we propose to apply column generation techniques [8, 30] to alternately construct the set of decision trees and solve for $\mathbf{w}$. From the point of view of column generation, the dimension of the primal variable $\mathbf{w}$ is infinitely large; column generation iteratively selects (generates) variables for solving the optimization. In our case, the infinitely many dimensions of $\mathbf{w}$ correspond to infinitely many decision trees, and thus we iteratively generate decision trees to solve the optimization.
Basically, we construct a working set of decision trees (denoted $\mathcal{W}$). During each column generation iteration we perform two steps. In the first step, we generate new decision trees and add them to $\mathcal{W}$. In the second step, we solve a restricted optimization problem in Eqn. (II-C) on the current working set to obtain the solution of $\mathbf{w}$. We repeat these two steps until convergence. Next we describe how to generate decision trees in a principled way by using the dual solution of the optimization in Eqn. (II-C), similar to the conventional column generation technique. First we derive the Lagrange dual problem of Eqn. (II-C), which can be written as
Here $\boldsymbol{\mu}$ denotes the dual variables. When using the column generation technique, one needs to find the most violated constraint in the dual. However, the constraints of the dual problem do not involve the decision trees. Instead of examining the dual constraints, we inspect the KKT conditions, which is an important difference compared to existing column generation techniques. According to the KKT conditions, at optimality, the following condition holds for the primal solution $\mathbf{w}$ and the current working set $\mathcal{W}$:
All of the previously generated decision trees satisfy the above condition. Obviously, generating new decision trees that most violate the above condition would contribute the most to the optimization of Eqn. (II-C). Hence the strategy for generating new decision trees is to solve the following problem:
The newly generated trees are then added to the current working set $\mathcal{W}$. If the generated tree still satisfies the condition in Eqn. (15), the current solution of $\mathbf{w}$ and the trees is already the globally optimal one.
The optimization in Eqn. (16) for generating new decision trees can be decomposed independently into solving the unary part and the pairwise part. For the unary part, we learn class-wise decision trees; namely, we generate $K$ decision trees corresponding to the $K$ categories at each column generation iteration. More specifically, according to the definitions in Eqn. (11), we solve the following problems:
To solve the above optimization problems, we train weighted decision tree classifiers. Specifically, when training decision trees for the $c$-th class, the training data is composed of those superpixels whose ground-truth label or predicted label is equal to the category label $c$. Since the output of a decision tree is in $\{0, 1\}$, the maximization in Eqn. (II-D) is achieved if the tree outputs 1 for each superpixel whose predicted label is $c$, and outputs 0 for each superpixel whose ground-truth label is $c$. Therefore, as indicated by the horizontal curly braces in Eqn. (II-D), superpixels with predicted labels of category $c$ are used as positive training examples, while superpixels with ground-truth labels of category $c$ are used as negative training examples. The dual solution $\boldsymbol{\mu}$ serves as the weighting of the training data.
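A minimal sketch of training a weighted tree on such data: for brevity this trains a depth-1 stump by exhaustive threshold search (the paper uses depth-2 trees); targets are 1 for the predicted-label-$c$ positives and 0 for the ground-truth-label-$c$ negatives, and the per-example weights play the role of the dual solution.

```python
import numpy as np

def train_weighted_stump(X, targets, weights):
    """Train a depth-1 decision tree (stump) on weighted examples by
    minimizing the weighted 0/1 error over all (feature, threshold) splits."""
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            pred = (X[:, f] > thr).astype(int)
            err = np.sum(weights * (pred != targets))
            if best is None or err < best[0]:
                best = (err, f, thr)
    _, f, thr = best
    # the learned tree: a binary-output predictor, as in the paper's setting
    return lambda x: int(x[f] > thr)

X = np.array([[0.1], [0.4], [0.6], [0.9]])   # 1-D toy features
targets = np.array([0, 0, 1, 1])
weights = np.array([1.0, 1.0, 1.0, 1.0])     # stand-in for the dual weighting
tree = train_weighted_stump(X, targets, weights)
```

Increasing the weight of an example biases the split toward classifying that example correctly, which is exactly how the dual solution steers tree generation.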
For the pairwise part, we generate one decision tree in each column generation iteration; the new decision tree for the pairwise part is generated as:
Similar to the unary case, we train a weighted decision tree classifier with the dual solution $\boldsymbol{\mu}$ as the training example weightings. The positive and negative training data are indicated by the horizontal curly braces in Eqn. (II-D). The pairwise tree response is obtained by applying the decision tree to pairwise features constructed from two neighbouring superpixels $(p, q)$, e.g., colour differences or shared boundary lengths.
With the above analysis, we can now apply column generation to jointly learn the decision trees and $\mathbf{w}$. The column generation (CG) procedure iterates the following two steps:
1) Solve Eqn. (II-D) to generate new decision trees for the unary and the pairwise potentials.
2) Add the new trees to the working set and re-solve for the primal solution $\mathbf{w}$ and dual solution $\boldsymbol{\mu}$.
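The two CG steps above can be sketched as a loop skeleton; the two callables are stand-ins (assumptions) for the tree-generation step and the restricted QP solve, not the paper's actual solvers.

```python
def column_generation(generate_trees, solve_restricted_qp, max_iters=50, tol=1e-6):
    """Skeleton of the column generation loop.

    generate_trees(w, dual)      -> list of new decision trees (step 1)
    solve_restricted_qp(trees)   -> (w, dual, objective)       (step 2)
    """
    working_set = []           # current set of decision trees
    w, dual = None, None       # primal ensemble weights and dual variables
    prev_obj = float("inf")
    for _ in range(max_iters):
        # step 1: generate the most-violating unary/pairwise trees
        working_set.extend(generate_trees(w, dual))
        # step 2: re-solve the restricted problem on the working set
        w, dual, obj = solve_restricted_qp(working_set)
        if prev_obj - obj < tol:   # no sufficient progress -> converged
            break
        prev_obj = obj
    return working_set, w

# stubs for illustration: each iteration adds one "tree"; the objective
# decreases for three iterations and then stalls, triggering convergence
objs = iter([4.0, 2.0, 1.0, 1.0, 1.0])
trees, w = column_generation(lambda w, d: ["tree"],
                             lambda ws: (len(ws), None, next(objs)))
```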
We show two segmentation examples from the Oxford flower dataset produced by our method with different numbers of CG iterations in Fig. 1. As can be seen, our method refines the segmentation as the CG iterations increase. Since this dataset is relatively simple, a few CG iterations are enough to obtain satisfactory results.
Solving the primal problem in the second step involves a large number of constraints due to the large output space. We next show how to apply the cutting-plane technique  to efficiently solve this problem.
II-E Speeding up optimization using cutting-plane
To apply cutting-plane for solving the optimization in Eqn. (II-C), we first derive its 1-slack formulation. The 1-slack SSVMs formulation was first introduced by . The 1-slack formulation of our method can be written as:
Cutting-plane methods work by finding the most violated constraint for each example:
To deal with the unbalanced appearance of different categories in the dataset, we define $\Delta$ as a weighted Hamming loss, which weighs errors for a given class inversely proportionally to the frequency with which it appears in the training data, similar to . In the inference problem of Eqn. (20), when using the Hamming loss as the label cost, the label cost term can be absorbed into the unary part. We can therefore apply graph cuts to efficiently solve Eqn. (20). For more complicated label cost functions, an efficient inference algorithm is proposed in . During each CG iteration, our method first solves Eqn. (II-D) given the current $\mathbf{w}$ and $\boldsymbol{\mu}$, and then solves a quadratic programming (QP) problem given the trees. When solving Eqn. (II-D), we train weighted decision tree classifiers using the highly optimized decision tree training method of .
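A sketch of absorbing a class-weighted Hamming loss into the unary term for loss-augmented inference (assuming, as one plausible reading, that the loss weight depends on the ground-truth class of each superpixel); the same graph-cut inference can then be run on the augmented unaries unchanged.

```python
import numpy as np

def loss_augmented_unary(unary, y_gt, class_loss_weights):
    """Fold a class-weighted Hamming loss into the unary energies, so that
    argmin_y E(y) - Delta(y_gt, y) is solvable by the ordinary MAP routine.

    unary: (num_superpixels, num_classes) energies
    class_loss_weights[c]: penalty weight for mislabelling a class-c superpixel
                           (e.g. inverse class frequency)
    """
    aug = unary.copy()
    P, C = unary.shape
    for p in range(P):
        for c in range(C):
            if c != y_gt[p]:
                # a mislabelling of p incurs loss, i.e. it LOWERS the
                # augmented energy of the wrong label
                aug[p, c] -= class_loss_weights[y_gt[p]]
    return aug

unary = np.array([[0.2, 0.8], [0.6, 0.4]])
aug = loss_augmented_unary(unary, y_gt=[0, 1], class_loss_weights=[1.0, 0.5])
```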
Discussions on the submodularity
It is known that if graph cuts are to be applied to achieve a globally optimal labelling in segmentation, the energy function must be submodular. For foreground/background segmentation, in which a (super-)pixel label takes a value in $\{0, 1\}$, we show that our method keeps this submodular property. It is commonly known that an energy function is submodular if its pairwise term satisfies $V(0, 0) + V(1, 1) \leq V(0, 1) + V(1, 0)$. Recall that our pairwise energy is written as $V(y_p, y_q) = \delta(y_p \neq y_q)\, \mathbf{v}^{\top} \Phi_{\mathrm{pair}}(\mathbf{x}_p, \mathbf{x}_q)$. Clearly we have $V(0, 0) = V(1, 1) = 0$ because of the indicator function $\delta(y_p \neq y_q)$. It remains to ensure $V(0, 1) \geq 0$ and $V(1, 0) \geq 0$. Given the non-negativity constraint we impose on $\mathbf{w}$ in our model, and the fact that the decision tree outputs take values in $\{0, 1\}$, we have $V(0, 1) \geq 0$ and $V(1, 0) \geq 0$. This completes the proof of the submodularity of our model. In the case of multi-object segmentation, the inference is done by the $\alpha$-expansion of graph cuts.
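The submodularity argument above can be checked mechanically; a small sketch:

```python
def is_submodular(theta):
    """theta[(a, b)] is the pairwise energy for the label pair (a, b), a,b in {0,1}.
    Submodularity requires: theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0)."""
    return theta[(0, 0)] + theta[(1, 1)] <= theta[(0, 1)] + theta[(1, 0)]

# our pairwise form: zero on agreement, and some non-negative value
# v^T Phi(x_p, x_q) >= 0 on disagreement -- hence always submodular
cost = 0.7
theta = {(0, 0): 0.0, (1, 1): 0.0, (0, 1): cost, (1, 0): cost}
ok = is_submodular(theta)
```

A counterexample with nonzero diagonal energies (which our indicator-based form rules out) would fail this check.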
Discussions on the non-negative constraint on $\mathbf{w}$
Our learning framework aligns with boosting methods: we learn a non-negatively weighted ensemble of weak structured learners (constructed from decision trees), analogous to the weak learners in boosting methods such as AdaBoost and LPBoost , where non-negative weightings are commonly used. Further, a weak structured learner generated by our column generation method is expected to make a positive contribution to the learning objective; if it is of no use to the objective, its weight will approach zero. It is therefore reasonable to enforce the non-negativity constraint on $\mathbf{w}$.
III Experiments
To demonstrate the effectiveness of the proposed method, we first compare our model with the most closely related baseline methods, namely SVMs, AdaBoost and SSVMs. In §III-C, we show that our method achieves state-of-the-art results by exploiting recent advances in feature learning [6, 16].
III-A Experimental setup
The datasets evaluated here include three binary datasets (Weizmann horse, Oxford flower and Graz-02) and two multi-class datasets (MSRC-21 and PASCAL VOC 2012). The Weizmann horse dataset (http://www.msri.org/people/members/eranb/) consists of 328 horse images with various backgrounds, with ground-truth masks available for each image. We use the same data split as in  and . The Oxford 17-category flower dataset  is composed of 849 flower images. Those with too small a foreground are removed, which leaves 753 images for segmentation . The data split stated in  is used to perform the evaluation. In our experiments, images of the Weizmann horse and the Oxford flower datasets are resized to 256×256. The Graz-02 dataset (http://www.emt.tugraz.at/~pinz/) contains 3 categories (bike, car and people). This dataset is considered challenging as the objects appear against various backgrounds and in different poses. We follow the evaluation protocol in , using 150 images for training and 150 for testing for each category. The MSRC-21 dataset  is a popular multi-class segmentation benchmark with 591 images containing objects from 21 categories. We follow the standard split to divide the dataset into training/validation/test subsets. The PASCAL VOC 2012 dataset (http://host.robots.ox.ac.uk/pascal/VOC/voc2012/) is a widely used benchmark for semantic segmentation, which contains 2913 images in the trainval set and 1456 images in the test set, covering 21 categories. Unlike many state-of-the-art methods such as , we do not use any additional training data for this dataset.
We start by over-segmenting the images into superpixels using SLIC , with 700 superpixels generated per image. We extract dense SIFT descriptors and color histograms around each superpixel centroid with different block sizes (12×12, 24×24, 36×36). The dense SIFT descriptors are then quantized into bag-of-words features using nearest-neighbour search with a codebook size of 400. We construct four types of pairwise features, also using different block sizes, to enforce spatial smoothness: color difference in LUV space, color histogram difference, texture difference in terms of LBP operators, and shared boundary length . The number of column generation iterations of our CRFTree is set to 50 based on a validation set. We learn tree potentials with a tree depth of 2. Training on the MSRC-21 dataset on a standard PC takes around 16 hours.
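As a toy illustration of one such pairwise feature, the sketch below computes the mean-colour difference between neighbouring superpixels; the names are illustrative, and the paper's actual features (histogram, LBP and shared-boundary-length terms, computed over multiple block sizes) differ in detail.

```python
import numpy as np

def superpixel_pairwise_features(pixels, sp_labels, edges):
    """One pairwise feature: absolute mean-colour difference per neighbour pair.

    pixels:    (N, 3) colour values (e.g. LUV channels)
    sp_labels: (N,) superpixel id of each pixel
    edges:     list of neighbouring superpixel pairs (p, q)
    """
    num_sp = sp_labels.max() + 1
    means = np.zeros((num_sp, 3))
    for s in range(num_sp):
        means[s] = pixels[sp_labels == s].mean(axis=0)
    # one feature vector per neighbouring pair: channel-wise |difference|
    return {(p, q): np.abs(means[p] - means[q]) for p, q in edges}

pixels = np.array([[10, 0, 0], [12, 0, 0], [0, 20, 0], [0, 22, 0]], dtype=float)
sp_labels = np.array([0, 0, 1, 1])
feats = superpixel_pairwise_features(pixels, sp_labels, edges=[(0, 1)])
```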
III-B Comparing with baseline methods
We first compare CRFTree with several conventional methods, namely linear SVMs, AdaBoost and SSVMs, to demonstrate the superiority of our method. For SVMs and AdaBoost, each superpixel is classified independently, without CRFs. We mainly evaluate on the more challenging Graz-02 and MSRC-21 datasets in this part. The regularization parameter C of SVMs, SSVMs and our CRFTree is selected based on a validation set. We use depth-2 decision trees for training AdaBoost and our CRFTree. The maximum iteration number of AdaBoost is chosen from {50, 100, 200}. For our method, we treat the foreground and background as two categories in the binary case to learn class-wise potentials.
For a comprehensive evaluation, we use two measurements to quantify the performance on the Graz-02 dataset: the intersection over union score and the pixel accuracy (including foreground and background). We report the results in Table I. As can be observed, AdaBoost based on depth-2 decision trees performs better than the linear SVMs. On the other hand, structured methods, which jointly consider local information and spatial consistency, are able to significantly outperform the simple binary models. By introducing nonlinear and class-wise potential learning, our method gains further improvement over SSVMs.
We learn class-wise potentials using our CRFTree for each of the 21 classes on the MSRC dataset. The comparative results are summarized in Table II (upper part). Similar conclusions can be drawn as on the Graz-02 dataset, and our CRFTree again outperforms all of its baseline competitors.
Table I: Results on the Graz-02 dataset.

Intersection/union (foreground, background) (%):
| Method | Bike | Car | People |
| SVMs | 67.8 (51.9, 83.8) | 69.7 (46.8, 92.6) | 65.0 (44.5, 85.5) |
| AdaBoost | 71.2 (57.6, 84.9) | 71.0 (49.4, 92.6) | 67.7 (48.7, 86.7) |
| SSVMs | 72.2 (58.6, 85.8) | 76.9 (60.0, 94.2) | 70.9 (53.8, 87.9) |
| CRFTree | 76.4 (65.0, 87.8) | 79.5 (64.0, 95.0) | 74.2 (58.7, 89.7) |
| CRFTree (FL) | 78.3 (67.7, 88.9) | 83.0 (70.1, 95.9) | 75.7 (61.0, 90.5) |

Pixel accuracy (foreground, background) (%):
| Method | Bike | Car | People |
| SVMs | 79.5 (67.4, 91.5) | 77.3 (57.2, 97.3) | 77.7 (63.8, 91.6) |
| AdaBoost | 83.8 (77.3, 90.3) | 80.1 (63.5, 96.6) | 80.5 (69.0, 91.9) |
| SSVMs | 83.8 (76.1, 91.6) | 85.5 (73.8, 97.2) | 83.9 (75.8, 92.1) |
| CRFTree | 87.8 (83.9, 91.8) | 87.0 (76.4, 97.7) | 85.9 (78.4, 93.4) |
| CRFTree (FL) | 89.1 (85.8, 92.4) | 90.0 (82.1, 98.0) | 86.9 (80.0, 94.0) |
Table II (lower part): Segmentation accuracy (%) on the MSRC-21 dataset.
| Method | Per-class scores (21 classes) | Average | Global |
| Shotton et al. | 49 88 79 97 97 78 82 54 87 74 72 74 36 24 93 51 78 75 35 66 18 | 67 | 72 |
| Ladicky et al. | 80 96 86 74 87 99 74 87 86 87 82 97 95 30 86 31 95 51 69 66 9 | 75 | 86 |
| Gonfaus et al. | 60 78 77 91 68 88 87 76 73 77 93 97 73 57 95 81 76 81 46 56 46 | 75 | 77 |
| Lucchi et al. | 59 90 92 82 83 94 91 80 85 88 96 89 73 48 96 62 81 87 33 44 30 | 76 | 82 |
| Lucchi et al. | 67 89 85 93 79 93 84 75 79 87 89 92 71 46 96 79 86 76 64 77 50 | 78.9 | 83.7 |
III-C Comparing with state-of-the-art methods
Since features play a pivotal role in the performance of vision algorithms, we exploit recent advances in feature learning to pursue state-of-the-art results, i.e., unsupervised feature learning  and convolutional neural networks (CNNs). Specifically, for the unsupervised feature learning, we first learn a dictionary of size 400 with patch size 6×6 on the evaluated image dataset using K-means, and then use soft-threshold coding  to encode patches extracted from each superpixel block. The final feature vectors (which we call encoding features here) are obtained by performing three-level max pooling over the superpixel block. For the CNN features, we use the AlexNet model trained on ImageNet (http://image-net.org) to generate CNN features. These two versions of our method are denoted as CRFTree (FL) and CRFTree (CNN), respectively. We only report the results of CRFTree (CNN) on the MSRC-21 and PASCAL VOC 2012 datasets, since our method already performs very well using the encoding features on the three binary datasets.
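A minimal sketch of the soft-threshold coding and max-pooling steps just described (the dictionary and patches here are random stand-ins; the paper uses a K-means dictionary and a three-level pooling, while this sketch pools once for brevity):

```python
import numpy as np

def soft_threshold_encode(patches, dictionary, alpha=0.25):
    """Soft-threshold coding in the style of Coates & Ng: f = max(0, D x - alpha).

    patches:    (num_patches, patch_dim) whitened patch vectors
    dictionary: (num_atoms, patch_dim) learned atoms
    """
    z = patches @ dictionary.T            # responses to every atom
    return np.maximum(0.0, z - alpha)     # sparse non-negative codes

def max_pool(codes):
    """Max pooling over all patches of one superpixel region."""
    return codes.max(axis=0)

rng = np.random.default_rng(0)
D = rng.normal(size=(4, 6))               # toy dictionary: 4 atoms of dim 6
patches = rng.normal(size=(10, 6))        # 10 patches from one superpixel
feature = max_pool(soft_threshold_encode(patches, D))
```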
Table III: Results on the Weizmann horse dataset (%).
| Method | Pixel accuracy | Foreground IoU |
| Levin & Weiss | 95.5 | - |
| Bertelli et al. | 94.6 | 80.1 |
| Kuettel et al. | 94.7 | - |
We quantify the performance by the global pixel-wise accuracy and the foreground intersection over union score, as in . The former measures the percentage of pixels correctly classified, while the latter directly reflects the segmentation quality of the foreground. The results are reported in Table III. Our method performs better than the kernel structural learning method of , which may result from the fact that they only introduce nonlinearity into the unary part, while our method achieves nonlinearity on both the unary and pairwise terms. The best score is obtained by . However, their method relies on the assumption that a perfect bounding box of the horse is available for each test image, which is not practically applicable. In contrast, we provide a principled and general way of nonlinearly learning CRFs parameters. We show some segmentation examples of our method in Fig. 2.
As in , we also use these two measures to quantify performance on the Oxford flower dataset, and report the results in Table IV. Our method performs comparably to the original work of  on this dataset, while again obtaining better results than the closely related state-of-the-art work of . It is also worth noting that the method in  is very domain-specific, as it relies on modelling the flower's shape (center and petals), while ours is generally applicable.
We also evaluate the F-score on the Graz-02 dataset, in addition to the above-mentioned intersection over union score and pixel accuracy. The F-score is defined as $F = \frac{2PR}{P + R}$, where $P$ is the precision and $R$ is the recall. We summarize the results in Table V and Table I. From Table V, it can be seen that our method significantly outperforms all the compared methods, which fully demonstrates the power of nonlinear and class-wise potential learning. Furthermore, we can observe from Table I that, compared with the previous results, adding more features helps to improve the performance.
Table V: F-scores (%) on the Graz-02 dataset.
| Method | Bike | Car | People | Average |
| Marszalek & Schmid | 61.8 | 53.8 | 44.1 | 53.2 |
| Fulkerson et al. | 66.4 | 54.7 | 51.4 | 57.5 |
| Aldavert et al. | 71.9 | 62.9 | 58.6 | 64.5 |
| Kuettel et al. | 63.2 | 74.8 | 66.4 | 68.1 |
Comparisons with state-of-the-art works are reported in the lower part of Table II. As we can see, by incorporating more advanced features, our CRFTree gains significant improvements over the previous results, which only use bag-of-words and color histogram features. It is worth noting that our method performs better than the closely related work of Lucchi et al., which exploits non-linear kernels. It should also be pointed out that we did not employ any global potentials (whereas in , the global and average per-category accuracies are improved from 70 and 73 to 82 and 76 by adding global information). If global or higher-order potentials were incorporated into our model, further performance gains could be expected. We show some qualitative evaluation examples in Fig. 3.
PASCAL VOC 2012
We generate deep features for each superpixel by averaging the pixel-wise feature-map scores within the superpixel, obtained from a pretrained FCN model. We then train our CRFTree model on the standard PASCAL VOC 2012 training dataset with the generated deep features. Following the standard evaluation procedure for the PASCAL VOC challenge, we upload our segmentation results to the test server and use the average intersection over union as the evaluation metric. We compare against several state-of-the-art methods (, , , ) on the test set of the PASCAL VOC 2012 dataset. The results are reported in Table VI. As seen from the table, our CRFTree beats Hypercolumn  and CFM , and outperforms FCN  by a notable margin. Although our method is surpassed by , it should be noted that their result is obtained by using extra training data (11,685 images vs. 1456 images used for training our CRFTree). Some qualitative evaluation examples of our method are illustrated in Fig. 4.
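Averaging a pixel-wise score map within superpixels can be sketched as follows (shapes and names are illustrative; the actual FCN feature maps are much larger):

```python
import numpy as np

def superpixel_features_from_fcn(score_map, sp_labels):
    """Average a pixel-wise score map within each superpixel.

    score_map: (H, W, C) per-pixel scores/features (e.g. from an FCN)
    sp_labels: (H, W) superpixel id of each pixel
    Returns (num_superpixels, C) per-superpixel features.
    """
    H, W, C = score_map.shape
    ids = sp_labels.ravel()
    flat = score_map.reshape(-1, C)
    num_sp = ids.max() + 1
    sums = np.zeros((num_sp, C))
    np.add.at(sums, ids, flat)                     # per-superpixel channel sums
    counts = np.bincount(ids, minlength=num_sp)    # pixels per superpixel
    return sums / counts[:, None]

score_map = np.arange(8, dtype=float).reshape(2, 2, 2)   # 2x2 image, 2 channels
sp_labels = np.array([[0, 0], [1, 1]])                   # two superpixels
feats = superpixel_features_from_fcn(score_map, sp_labels)
```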
Table VI: Mean IoU (%) on the PASCAL VOC 2012 test set.
| Method | Mean IoU |
| Zheng et al. | 72.0 |
Nonlinear structured learning has been a promising yet challenging topic in the community. In this work, we have proposed a nonlinear structured learning method of tree potentials for image segmentation. The unary and pairwise potentials are ensembles of class-wise trees, with the ensemble parameters and the trees jointly learned in a unified large-margin framework. In this way, nonlinearity is easily introduced into CRFs learning. The resulting model involves an exponential number of variables and constraints. We therefore derive a novel algorithm combining a modified column generation method and the cutting-plane technique for efficient model training. We have demonstrated the superiority of the proposed nonlinear potential learning method by comparing against state-of-the-art methods on both binary and multi-class object segmentation datasets. A potential disadvantage of our method is that it is prone to overfitting due to its strong nonlinear learning capacity. This can be alleviated by using more training data. On the other hand, as we show in Table II, our method using pretrained CNN features achieves the best performance. It is therefore worth exploring further combinations of our method with deep learning techniques in future work.
-  R. Achanta, K. Smith, A. Lucchi, P. Fua, and S. S sstrunk. Slic superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
D. Aldavert, A. Ramisa, R. L. de M ntaras, and R. Toledo.
Fast and robust object segmentation with the integral linear
Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2010.
-  R. Appel, T. J. Fuchs, P. Dollár, and P. Perona. Quickly boosting decision trees - pruning underachieving features early. In Proceedings of International Conference on Machine Learning, 2013.
-  A. Bauer, S. Nakajima, and K.-R. Müller. Efficient exact inference with loss augmented objective in structured learning. IEEE Transactions on Neural Networks and Learning Systems, 2016.
-  L. Bertelli, T. Yu, D. Vu, and B. Gokturk. Kernelized structural SVM learning for supervised object segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2011.
-  A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization. In Proceedings of International Conference on Machine Learning, 2011.
-  J. Dai, K. He, and J. Sun. Convolutional feature masking for joint object and stuff segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015.
-  A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation. Machine Learning, 2002.
-  B. Fulkerson, A. Vedaldi, and S. Soatto. Localizing objects with smart dictionaries. In Proceedings of European Conference on Computer Vision, 2008.
-  B. Fulkerson, A. Vedaldi, and S. Soatto. Class segmentation and object localization with superpixel neighborhoods. In Proceedings of IEEE International Conference on Computer Vision, 2009.
-  J. M. Gonfaus, X. B. Bosch, J. van de Weijer, A. D. Bagdanov, J. S. Gual, and J. G. Sabaté. Harmony potentials for joint classification and segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2010.
-  B. Hariharan, P. A. Arbeláez, R. B. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015.
-  T. Joachims. Training linear SVMs in linear time. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2006.
-  A. Joulin, F. R. Bach, and J. Ponce. Discriminative clustering for image co-segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2010.
-  M. Kim. Mixtures of conditional random fields for improved structured output prediction. IEEE Transactions on Neural Networks and Learning Systems, 2016.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of Advances in Neural Information Processing Systems, 2012.
-  D. Kuettel and V. Ferrari. Figure-ground segmentation by transferring window masks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2012.
-  L. Ladicky, C. Russell, P. Kohli, and P. H. S. Torr. Associative hierarchical crfs for object class image segmentation. In Proceedings of IEEE International Conference on Computer Vision, 2009.
-  J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of International Conference on Machine Learning, 2001.
-  J. Lafferty, X. Zhu, and Y. Liu. Kernel conditional random fields: Representation and clique selection. In Proceedings of International Conference on Machine Learning, 2004.
-  A. Levin and Y. Weiss. Learning to combine bottom-up and top-down segmentation. In Proceedings of European Conference on Computer Vision, 2006.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2015.
-  A. Lucchi, Y. Li, and P. Fua. Learning for structured prediction using approximate subgradient descent with working sets. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2013.
-  A. Lucchi, Y. Li, K. Smith, and P. Fua. Structured image segmentation using kernelized features. In Proceedings of European Conference on Computer Vision, 2012.
-  M. Marszalek and C. Schmid. Accurate object localization with shape masks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2007.
-  M.-E. Nilsback and A. Zisserman. Delving deeper into the whorl of flower segmentation. Image Vision Computing, 2010.
-  S. Nowozin, P. Gehler, and C. H. Lampert. On parameter learning in CRF-based approaches to object class image segmentation. In Proceedings of European Conference on Computer Vision, 2010.
-  S. Nowozin, C. Rother, S. Bagon, T. Sharp, B. Yao, and P. Kohli. Decision tree fields. In Proceedings of IEEE International Conference on Computer Vision, 2011.
-  A. Severyn and A. Moschitti. Fast support vector machines for structural kernels. In Proceedings of European Conference on Machine Learning & Knowledge Discovery in Databases, 2011.
-  C. Shen and Z. Hao. A direct formulation for totally-corrective multi-class boosting. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2011.
-  C. Shen, G. Lin, and A. van den Hengel. Structboost: Boosting methods for predicting structured output variables. CoRR, 2013.
-  J. Shotton, M. Johnson, and R. Cipolla. Semantic texton forests for image categorization and segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2008.
-  I. Steinwart. Sparseness of support vector machines. Journal of Machine Learning Research, 2003.
-  M. Szummer, P. Kohli, and D. Hoiem. Learning CRFs using graph cuts. In Proceedings of European Conference on Computer Vision, 2008.
-  B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Proceedings of Advances in Neural Information Processing Systems, 2004.
-  I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 2005.
-  C.-N. Yu and T. Joachims. Training structural SVMs with kernels using sampled cuts. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2008.
-  S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr. Conditional random fields as recurrent neural networks. In Proceedings of IEEE International Conference on Computer Vision, 2015.