We advocate regularized loss functions for weakly-supervised training of CNNs for semantic segmentation. The use of unsupervised loss terms acting as regularizers on the output of deep-learning architectures is a principled approach to exploiting the structure of partially labeled data [1, 2]. Surprisingly, this general idea was largely overlooked in weakly-supervised CNN segmentation, where current methods often introduce computationally expensive MRF/CRF layers or post-processing inference steps generating “fake” full masks from partial input.
We propose to use (relaxations of) MRF/CRF terms directly inside the loss, avoiding explicit guessing of full training masks. This approach follows well-established ideas for weak supervision in deep learning [1, 2] and continues our recent work that proposed integrating standard objectives of shallow segmentation (throughout, “shallow” refers to standard segmentation methods unrelated to CNNs) directly into loss functions. While that work is entirely focused on the normalized cut loss, motivated by a popular balanced segmentation criterion, we now study a different class of regularized losses including (relaxations of) standard MRF/CRF potentials. While such potentials are common as shallow regularizers [5, 6, 7, 8] or as trainable layers, they were never used directly as losses.
We propose and evaluate several new losses motivated by MRF/CRF potentials and their combination with balanced partitioning criteria. Such losses can be adapted to many forms of weak (or semi-) supervision based on diverse existing MRF/CRF formulations for interactive graph cut segmentation. However, the scope of this paper is limited to training with partial (user scribble) masks, where regularized losses combined with cross entropy over the partial masks achieve state-of-the-art accuracy close to full-supervision quality.
Besides the basic Potts model, we use the popular fully connected pairwise CRF potentials of Krähenbühl and Koltun, often referred to as dense CRF. In conjunction with CNNs, dense CRFs have become the de-facto choice for semantic segmentation in the contexts of fully [11, 12, 9] and weakly/semi-supervised [13, 14, 15] learning. For instance, DeepLab popularized dense CRF as a post-processing step. In the fully supervised setting, integrating the unary scores of a CNN classifier with the pairwise potentials of dense CRF achieves competitive performance. This is facilitated by fast mean-field inference techniques for dense CRF based on high-dimensional filtering.
Weakly supervised semantic segmentation is commonly addressed by mimicking full supervision via synthesizing fully-labeled training masks (proposals) from the available partial inputs [15, 14, 17]. These schemes typically iterate two steps: CNN training and proposal generation via regularization-based shallow interactive segmentation, e.g. graph cuts or dense CRF mean-field inference [15, 14]. In contrast, our approach avoids explicit inference steps by integrating shallow regularizers directly into the loss functions. Section 3 makes some interesting connections between proposal generation and our regularized losses.
For simplicity, this paper uses a very basic quadratic relaxation of discrete MRF/CRF potentials, even though there are many alternatives, e.g. TV-based and convex formulations [19, 20], as well as LP and other relaxations [22, 23]. Evaluation of different relaxations in the context of regularized weak-supervision losses is left for future work. Our main contributions are:
We propose and evaluate several regularized losses for weakly supervised CNN segmentation based on Potts, dense CRF, and kernel cut regularizers (Sec. 2). Our approach avoids explicit inference steps, unlike proposal-based methods. This continues the study of losses motivated by standard shallow segmentation energies, started with the normalized cut loss.
We show that iterative proposal-generation schemes for weak supervision, which alternate CNN learning and mean-field inference, can be viewed as an approximate alternating direction optimization of regularized losses (Sec. 3).
Comprehensive experiments (Sec. 4) with our regularized weakly supervised losses show (1) state-of-the-art performance for weakly supervised CNN segmentation, reaching near full-supervision accuracy, and (2) better quality and efficiency than proposal-generating methods or the normalized cut loss. Alternating schemes (proposal generation) give a higher loss at convergence.
2 Our Regularized Semi-supervised Losses
This section introduces our regularized losses for weakly-supervised segmentation. In general, the use of regularized losses is a well-established approach in semi-supervised deep learning [1, 2]. We advocate this principle for semantic CNN segmentation, propose specific shallow regularizers for such losses, and discuss their properties.
Assuming an image I with its partial ground truth labeling or mask Y, let S be the K-way softmax output of a segmentation network fθ parameterized by θ. In general, CNN training with our joint regularized loss corresponds to an optimization problem of the following form

    min_θ  ℓ(fθ(I), Y) + λ R(fθ(I))    (1)

where ℓ is a ground truth loss and R is a regularization term or regularization loss. Both losses have argument S = fθ(I), the K-way softmax segmentation generated by the network. Using cross entropy over the partial labeling as the ground truth loss, we have the following joint regularized semi-supervised loss

    Σ_{p ∈ Ω_L} H(y_p, s_p) + λ R(S)    (2)

where Ω_L is the set of labeled pixels and H(y_p, s_p) is the cross entropy between the network-predicted segmentation s_p (the row of matrix S corresponding to pixel p) and the ground truth label y_p.
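For concreteness, the partial cross-entropy term of the joint loss can be sketched in a few lines of NumPy (a minimal illustration with our own variable names and shapes, not the paper's implementation):

```python
import numpy as np

def partial_cross_entropy(S, y, labeled):
    """Cross entropy summed only over labeled (scribbled) pixels.

    S       : (N, K) array of softmax predictions, each row sums to 1
    y       : (N,) integer labels (values on unlabeled pixels are ignored)
    labeled : (N,) boolean mask marking scribbled pixels
    """
    idx = np.flatnonzero(labeled)
    p = S[idx, y[idx]]                       # predicted probability of the true class
    return float(-np.log(np.clip(p, 1e-12, None)).sum())
```

Unlabeled pixels contribute nothing here; they are handled only through the regularization term R(S).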
In principle, any differentiable function of S can be used as a loss. This paper studies (relaxations of) regularizers from shallow segmentation as loss functions. Section 2.1 details our MRF/CRF loss and its implementation. In Section 2.2, we propose the kernel cut loss combining CRF with normalized cut terms and justify this combination.
2.1 Potts/CRF Losses
Assuming that the segmentation variables s_p are restricted to binary (one-hot) class indicators, the standard Potts model can be represented via Iverson brackets [·], as on the left hand side below

    Σ_{pq} W_pq [s_p ≠ s_q]  ≈  Σ_{pq} W_pq ||s_p − s_q||² / 2    (3)

where W = [W_pq] is a matrix of pairwise discontinuity costs, or an affinity matrix. The right hand side above is a particularly straightforward quadratic relaxation of the Potts model that works for relaxed s_p corresponding to a typical softmax output of CNNs; it is exact for one-hot indicators. In fact, this quadratic function is very common in the general context of regularized weakly supervised losses in deep learning.
Instead, this paper uses a different relaxation of the Potts model,

    R(S) = Σ_k S^{k⊤} W (1 − S^k),    (4)

expressed in terms of support vectors for each label, i.e. the columns S^k of the segmentation matrix S. For discrete segment indicators, (4) gives the cost of a cut between segments, the same as the Potts model on the left hand side of (3), but it differs from the relaxation on the right hand side of (3).
The affinity matrix W can be sparse or dense. Sparse W commonly appears in the context of boundary regularization and edge alignment in shallow segmentation. With a dense Gaussian kernel W, (4) is a relaxation of DenseCRF. The implementation details, including fast computation of the gradient (11) of the CRF loss with a dense Gaussian kernel, are described in Sec. 4.
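For illustration, the relaxed Potts/CRF loss of (4) reduces to one matrix-vector product per class. The following naive dense NumPy sketch (our own naming, O(N²) in the number of pixels, so only for toy examples; real images require the fast filtering discussed in Sec. 4) shows the computation:

```python
import numpy as np

def crf_loss(S, W):
    """Relaxation sum_k S_k^T W (1 - S_k) of the Potts cut cost, eq. (4).

    S : (N, K) soft segmentation (softmax rows); W : (N, N) affinity matrix.
    For one-hot rows this equals the weighted cut cost between segments.
    """
    loss = 0.0
    for k in range(S.shape[1]):
        sk = S[:, k]
        loss += sk @ (W @ (1.0 - sk))    # cut cost contributed by label k
    return float(loss)
```

On a discrete labeling the value is exactly the Potts cut cost; e.g. two mutually-affine pixels assigned different labels pay the affinity twice (once per label), while identical labels pay nothing.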
2.2 Kernel Cut Loss
Besides the CRF loss (4), we also propose its combination with the normalized cut loss, where each term is the ratio of a segment’s cut cost (Potts model) to the segment’s weighted size (normalization)

    Σ_k  S^{k⊤} W (1 − S^k) / (d^⊤ S^k)    (5)

where d = W·1 is the vector of node degrees.
The sum of (5) and (4), our kernel cut loss (6), is motivated by kernel cut shallow segmentation, with the complementary benefits of balanced normalized cut partitioning and object boundary regularization or edge alignment as in the Potts model. While the kernel cut loss is a high-order objective, its gradient (12) can be efficiently computed, see Sec. 4.
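Under the same dense affinity W, the normalized cut term of (5) and its combination with the CRF term can be sketched as follows (again a naive O(N²) illustration with our own naming; the trade-off weight `gamma` between the two terms is a hypothetical hyper-parameter):

```python
import numpy as np

def kernel_cut_loss(S, W, gamma=1.0):
    """Normalized cut term (5) plus gamma times the relaxed Potts/CRF term (4).

    S : (N, K) soft segmentation; W : (N, N) symmetric affinity matrix.
    """
    d = W.sum(axis=1)                        # node degrees W·1
    loss = 0.0
    for k in range(S.shape[1]):
        sk = S[:, k]
        cut = sk @ (W @ (1.0 - sk))          # segment's cut cost
        loss += cut / max(d @ sk, 1e-12)     # balanced (normalized cut) term
        loss += gamma * cut                  # boundary/edge-alignment (CRF) term
    return float(loss)
```

The normalization by the weighted segment size d^⊤S^k is what discourages trivially small segments, complementing the pure cut cost.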
This paper experimentally compares the CRF, normalized cut and kernel cut losses for weakly supervised segmentation. In our experiments, the best weakly supervised segmentation is achieved with the kernel cut loss.
Note that the standard normalized cut and CRF objectives in shallow segmentation require fairly different optimization techniques (e.g. spectral relaxation or graph cuts). In contrast, the standard gradient descent approach for optimizing losses during CNN training allows significant flexibility in including different regularization terms, as long as a reasonable relaxation exists.
3 Connecting Proposal Generation and Loss Optimization
Mainstream weakly-supervised methods generate segmentation proposals and train with such “fake” ground truth [17, 25, 26, 13, 14, 27]. In fact, many off-line shallow interactive segmentation techniques can be used to propagate labels and generate masks, e.g. graph cuts [6, 7], random walker [28, 21], etc. However, training is vulnerable to mistakes in the proposals. While alternating proposal generation and network training may improve the quality of the proposals, errors reinforce themselves in such a self-taught learning scheme. Rather than training networks to fit potential errors, our regularized semi-supervised loss framework is more direct and principled [29, 1].
In this section, we show that proposal methods can be viewed as an approximate alternating direction method (ADM) of optimization, one which does not account directly for the network variables in the ADM splitting. This optimization insight suggests that expressing very popular regularization terms, for instance dense CRF, explicitly in terms of the network variables and performing direct back-propagation could be a better optimization alternative to the existing proposal generation methods, in both the quality of the obtained solutions and efficiency. Our optimization results confirm this, e.g. see the CRF loss plot in Fig. 3 and the training times in Table 3.
We consider proposal-generation schemes iterating between two steps, network training and proposal generation. The alternation can happen either when training converges or online for each batch. At each iteration, the first step learns the network parameters θ from a given (fixed) ground-truth proposal X computed at the previous iteration. This amounts to updating the K-way softmax segmentation S by minimizing the following proposal-based cross entropy with respect to parameters θ via standard back-propagation:

    Σ_p H(x_p, s_p)    (7)

where x_p are the ground-truth proposals for unlabeled pixels p ∉ Ω_L. The proposal mask X is constrained to equal Y on the labeled pixels Ω_L. The second step fixes the network output S and finds the next ground-truth proposal X by minimizing regularization functionals that are standard in shallow segmentation:

    min_X  Σ_p H(x_p, s_p) + λ R(X)    (8)

where X denotes latent pixel labels within the probability simplex. Note that for fixed S the cross entropy terms in (8) are unary potentials for X. When R corresponds to dense CRF, optimization of (8) is facilitated by fast mean-field inference techniques [8, 31] significantly reducing the computational times via parallel updates of variables x_p and high-dimensional filtering. Appendix 0.A shows that mean-field algorithms can be equivalently interpreted as a convex-concave approach to optimizing the following objective
    Σ_p H(x_p, s_p) + λ R(X) + Σ_p ⟨x_p, log x_p⟩    (9)

combining (8) and negative entropies that act as a simplex barrier for variables x_p. This yields closed-form independent (parallel) updates of variables x_p, while ensuring convergence under some conditions (parallel updates are guaranteed to converge for concave CRF models, e.g. Potts).
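A single parallel update of this mean-field scheme can be sketched as follows (our own naming; the sign convention assumes the attractive Potts pairwise term, and in practice the dense product `W @ X` is replaced by one fast Gaussian filtering pass per class):

```python
import numpy as np

def mean_field_step(X, S, W, lam=1.0):
    """One parallel mean-field update for the relaxed Potts model.

    X : (N, K) current proposal distributions (rows on the simplex)
    S : (N, K) fixed network softmax output (unary term = -log S)
    W : (N, N) affinity; W @ X stands in for one filtering pass per class.
    """
    logits = np.log(np.clip(S, 1e-12, None)) + 2.0 * lam * (W @ X)
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    X_new = np.exp(logits)
    return X_new / X_new.sum(axis=1, keepdims=True)   # per-pixel softmax
```

Every pixel is updated simultaneously from the previous iterate X, which is exactly what makes the updates parallelizable (and what requires concavity of the CRF term for guaranteed convergence).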
Proposal methods alternating steps (9) and (7) can be viewed as an approximate alternating direction method (ADM) for optimizing our regularized loss (2) using the following decomposition of the problem. (In its basic form, the alternating direction method transforms a problem min_x F(x) + G(x) into min_{x,y} F(x) + G(y) subject to x = y, and alternates optimization over x and y. This may work if optimizing F and G separately is easier than the original problem.)

    min_{θ, X}  Σ_{p ∈ Ω_L} H(y_p, s_p) + λ R(X) + Σ_p KL(x_p ‖ s_p)    (10)
where KL(·‖·) denotes the Kullback-Leibler divergence.
Instead of directly optimizing the regularized loss (2) with respect to the network parameters, proposal methods split the optimization problem into the two easier sub-problems in (10). This is done by replacing the network softmax outputs S in the regularization term by latent distributions X (the proposals) and minimizing a divergence between S and X, which is KL in this case. This is conceptually similar to the general principles of ADM, except that the splitting is not done directly with respect to the variables of the problem (i.e., parameters θ) but rather with respect to the network outputs S. This can be viewed as an approximate ADM scheme, which does not account directly for variables θ in the ADM splitting. Note that the method generating proposals via a dense CRF layer slightly deviates from the described ADM scheme, since it also back-propagates through this layer (its cross-entropy loss uses a CRF-layer proposal generated from the network output S, and the dependence of the proposal on S motivates back-propagation through the layer). But, as we show in Table 3, such back-propagation does not help and can be dropped. Moreover, our direct optimization of regularized losses makes such proposal-generating layers (or procedures) entirely redundant. Our approach gives simpler and more efficient training, avoiding expensive iterative inference and obtaining better performance.
4 Experiments
Sec. 4.1 presents the main experimental result of this paper. For weakly-supervised segmentation with scribbles, we train with different regularized losses. The experiments cover our proposed CRF loss, the high-order normalized cut loss, and the kernel cut loss, as discussed in Sec. 2. We show that combining CRF (4) with normalized cut (5), a la KernelCut, yields the best performance.
In Sec. 4.2, we compare training with the direct loss against training with generated proposals. In light of the technical connection between the two schemes from an optimization perspective (Sec. 3), we also evaluate how “regularized” the obtained segmentations are by computing the regularization energy. Besides scribbles, we also apply our regularized loss framework to supervision based on image-level labels and compare to SEC, a recent method based on proposal generation. Our method achieves state-of-the-art results for weakly supervised segmentation with scribbles or image-level labels.
We also investigate whether the regularization loss facilitates fully or semi-supervised segmentation with unlabeled images. Some preliminary results for these extensions are given in Sec. 4.3.
Dataset Most experiments are on the PASCAL VOC12 segmentation dataset. For all methods, we train with the augmented dataset of 10,582 images. The scribble annotations for these training images are publicly available. Following the standard protocol, mean intersection-over-union (mIoU) over the 21 classes is evaluated on the val set that contains 1,449 images. For image-level label supervision, our experimental setup and dataset are the same as in prior work.
Implementation details Our implementation is based on DeepLab v2. We follow the learning rate strategy of DeepLab v2 (https://bitbucket.org/aquariusjay/deeplab-public-ver2) for the baseline with full supervision. For our method with regularized losses, we first train with the partial cross entropy loss only, over the seeds. Then we fine-tune with the extra regularized losses of different types for the same number of iterations. Our CRF and normalized cut regularization losses are defined at full image resolution. If the network outputs a downsampled labeling, which is typical, the labeling is interpolated to the original resolution before being fed into the loss layer.
We choose a dense Gaussian kernel over RGBXY channels for the affinity matrix W. The Gaussian bandwidth, a hyper-parameter, is optimized via validation for the DenseCRF, normalized cut and kernel cut losses. As also mentioned in prior work, a naive forward and backward pass of such a fully-connected pairwise or high-order loss layer would be prohibitively slow (O(N²) for N pixels). For example, to implement (4) as a loss, we need to compute its gradient (11) with respect to the network output, which for symmetric W is

    ∂R/∂S^k = W·1 − 2 W S^k.    (11)
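For small images, the dense Gaussian RGBXY kernel can be materialized explicitly, as in the toy O(N²) sketch below (bandwidth parameter names and defaults are our own; real implementations never build W and instead apply it implicitly via permutohedral-lattice filtering):

```python
import numpy as np

def gaussian_affinity(rgb, xy, sigma_rgb=0.1, sigma_xy=10.0):
    """Dense Gaussian kernel over concatenated RGBXY features.

    rgb : (N, 3) colors; xy : (N, 2) pixel coordinates.
    Returns the (N, N) affinity W_pq = exp(-||f_p - f_q||^2 / 2)
    over bandwidth-scaled features f, with zero self-affinity.
    """
    f = np.concatenate([rgb / sigma_rgb, xy / sigma_xy], axis=1)
    sq = ((f[:, None, :] - f[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(sq, np.inf)             # no self-affinity (W_pp = 0)
    return np.exp(-0.5 * sq)
```

The two bandwidths control how quickly affinity decays in color space versus image space, which is exactly what is tuned by validation above.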
For DenseCRF, where W is a fully connected Gaussian kernel, computing the gradient (11) becomes a standard bilateral filtering problem, for which many fast methods have been proposed [16, 32]. We implement our loss layers using fast Gaussian filtering, which is also utilized in the inference of DenseCRF [8, 9]. Using the same fast filtering component, we can also compute the gradient (12) of our kernel cut loss (6) in linear time: the normalized cut term of the loss contributes, for each label k,

    ∂/∂S^k [ S^{k⊤}W(1 − S^k) / (d^⊤S^k) ]  =  (W·1 − 2WS^k)/(d^⊤S^k) − (S^{k⊤}W(1 − S^k)/(d^⊤S^k)²) d    (12)

which again requires only W·1 and WS^k. Note that our CRF and kernel cut loss layers are much faster than a CRF inference layer [13, 9] since no iterations are needed.
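Given any black-box filtering operator approximating x ↦ W x (e.g. permutohedral-lattice Gaussian filtering), the CRF-loss gradient (11) needs only the degree vector W·1 plus one filtering pass per class; a sketch under that assumption (the `filt` callable is hypothetical, standing in for the fast filtering component):

```python
import numpy as np

def crf_loss_grad(S, filt):
    """Gradient W·1 - 2 W S^k of sum_k S_k^T W (1 - S_k), for symmetric W.

    S    : (N, K) soft segmentation
    filt : callable mapping an (N,) vector x to W @ x; with fast Gaussian
           filtering this makes the whole gradient linear in N.
    """
    N, K = S.shape
    deg = filt(np.ones(N))                               # degree vector W·1
    WS = np.stack([filt(S[:, k]) for k in range(K)], 1)  # W S, one pass per class
    return deg[:, None] - 2.0 * WS
```

The same two quantities, W·1 and W S^k, also suffice for the normalized cut gradient (12), so the kernel cut loss reuses the identical filtering passes.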
4.1 Comparison of regularized losses
| Network | CE only | w/ NC | w/ CRF | w/ KernelCut | full sup. |
|---|---|---|---|---|---|
| DeepLab-MSc-largeFOV | 56.0 (8.1) | 60.5 (3.6) | 63.1 (1.0) | 63.5 (0.6) | 64.1 |
| DeepLab-MSc-largeFOV+CRF | 62.0 (6.7) | 65.1 (3.6) | 65.9 (2.8) | 66.7 (2.0) | 68.7 |
| DeepLab-VGG16 | 60.4 (8.4) | 62.4 (6.4) | 64.4 (4.4) | 64.8 (4.0) | 68.8 |
| DeepLab-VGG16+CRF | 64.3 (7.2) | 65.2 (6.3) | 66.4 (5.1) | 66.7 (4.8) | 71.5 |
| DeepLab-ResNet101 | 69.5 (6.1) | 72.8 (2.8) | 72.9 (2.7) | 73.0 (2.6) | 75.6 |
| DeepLab-ResNet101+CRF | 72.8 (4.0) | 74.5 (2.3) | 75.0 (1.8) | 75.0 (1.8) | 76.8 |
Tab. 1 summarizes the results with different regularized losses (gaps to full supervision in parentheses). We report results both with and without CRF post-processing on various networks. The baselines use cross entropy losses over fully labeled masks or over the partial seeds, ignoring the unlabeled region. We choose the weight of the regularization term to achieve the best validation accuracy. The previous state of the art for scribble-based segmentation comes from prior work with an extra normalized cut loss. Consistently over different networks, the proposed CRF loss outperforms the normalized cut loss. Our best result is obtained when combining the normalized cut and DenseCRF losses. Clearly, utilizing the CRF and KernelCut losses reduces the gap toward the full-supervision baseline. With DeepLab-MSc-largeFOV followed by CRF post-processing, the KernelCut regularized loss achieves an mIoU of 66.7%, while the previous best is 65.1% with the normalized cut loss. Our result with scribbles approaches 97.6% of the quality of full supervision, yet only 3% of all pixels are scribbled. This paper pushes the limit of weakly supervised segmentation.
To get some intuition about these losses and their regularization effect, we visualize their gradients with respect to the segmentation in Fig. 1. Note that the sign of a gradient indicates whether a certain labeling is encouraged or discouraged. The color-coded gradients clearly show evidence toward better color clustering, edge alignment and object separation with regularized losses. The gradients of the different losses differ slightly. Since kernel cut is the combination of normalized cut with CRF, its gradient is the sum of their gradients.
Fig. 2 shows some qualitative examples with different losses. Results with regularized losses are better than those without. Besides, the segmentations with the kernel cut loss have better edge alignment than those with the normalized cut loss, due to the extra pairwise CRF term. The effects of the CRF and normalized cut losses are different. Our kernel cut loss combines the benefits of both regional color clustering (normalized cut) and pairwise regularization (DenseCRF). By combining both we achieve better segmentation regularization.
4.2 Direct loss vs proposal generation
Here we compare our direct loss and proposal generation methods (Sec. 3) in the weakly supervised setting, focusing mainly on scribbles. Proposals can be generated offline or online. One straightforward proposal method is to treat GrabCut output as “fake” ground truth for training. ScribbleSup refines the GrabCut output using the network-predicted segmentation as unary potentials; the proposals are updated, but generated offline. For online proposal generation, we let the network output go through a CRF inference layer during training at each iteration. The loss for proposal generation is the cross entropy between the input and output of the CRF inference layer, see Sec. 3. A recent work that generates proposals online for tag-based weakly-supervised segmentation is SEC.
| proposal generation | direct loss |
Table 2 compares our direct loss method to the proposal generation variants above. We used the public implementation of SEC’s constrain-to-boundary loss (https://github.com/kolesman/SEC), which combines an explicit dense CRF proposal layer and a cross entropy loss between the proposal and the network output. We report the results for our adaptation of tag-based SEC to weak supervision with scribbles. We find that (frequent) online proposal updates give better results than fixed proposals. Compared to our direct loss method, (online) proposal generation gives inferior segmentation accuracy over different networks, see Table 2.
We further evaluate online proposal generation. Figure 3 compares it to our regularized loss method in terms of segmentation accuracy and obtained loss values. Even though the proposal generation scheme indirectly minimizes our regularized loss, such a training scheme gives higher loss values than those obtained with our direct loss minimization. Direct loss minimization also gives higher mIoU on both training and validation sets.
As mentioned earlier, SEC was originally focused on tag-based supervision, and Table 3 reports some tests for that form of weak supervision. We compare SEC with its simplification replacing the constrain-to-boundary loss by our direct regularization loss. We train using different combinations of losses for supervision based on image-level labels (tags). Our CRF loss improves training to 43.9% mIoU compared to 38.4% without it. There is only a small improvement in segmentation mIoU when replacing the constrain-to-boundary loss by the CRF loss. However, the direct loss layer is several times faster than SEC’s explicit proposal layer. The segmentation accuracy and overall training speed are also reported in Tab. 3. The results are for the DeepLab-largeFOV network since it is fast to train. We also tested a variant of SEC without back-propagation through the (CRF) proposal layer, which we show is redundant in practice.
Fig. 4 shows test examples for our method and SEC with image tags as supervision. Using the direct loss rather than the constrain-to-boundary loss gives similar segmentations, while being faster to train since no inference is needed.
| include this loss? | | | | |
|---|---|---|---|---|
| Seeding loss | ✓ | ✓ | ✓ | ✓ |
| Expansion loss | ✓ | ✓ | ✓ | ✓ |
| Constrain-to-boundary loss | | ✓ | ✓ | |
| Our direct CRF loss | | | | ✓ |
| Overall training time in s/batch | 0.86 | 1.19 (0.33) | 1.19 (0.33) | 0.98 (0.12) |
To see the limit of our algorithm with scribble supervision, we train with shortened scribbles, visualized in Fig. 5. Note that at length zero there is only one click or spot per object. For length ratios from zero to 100%, our direct loss method achieves much better segmentation than ScribbleSup, see Fig. 6. The improvement over ScribbleSup is more significant for shorter scribbles, or even clicks.
4.3 Fully and semi supervised segmentation
We have demonstrated the usefulness of regularized losses for weakly supervised segmentation. Here we test whether they also help full supervision, or semi-supervision with extra unlabeled images. For full supervision, we add the NC loss besides the cross entropy loss over the labeled masks. This experiment is on a simple saliency dataset where color clustering is obvious and likely to help. As shown in Tab. 4, when we increase the weight of the NC loss, we indeed obtain segmentations that are more regularized. However, with the extra regularization loss during training, the cross entropy loss worsens and mIoU decreases. The conclusion is that naively imposing a regularized loss on labeled images does not help fully supervised segmentation. Empirical risk minimization is in some sense optimal for fully labeled data; the extra regularization loss steers the network in the wrong direction if the regularization does not fully agree with the ground truth. Though negative, reporting this result helps complete our investigation of regularized losses in the fully, weakly and semi-supervised settings.
| NC loss weight | mIoU | cross entropy loss | NC loss |
For training with both labeled and unlabeled images, our joint loss includes cross entropy on labeled images and regularization on unlabeled ones. The 11K labeled images are from PASCAL VOC 2012 and the 10K unlabeled ones are from VOC 2007. We train DeepLab-LargeFOV with different amounts of labeled and unlabeled images, see Tab. 5. For the baseline that can only utilize labeled images, the performance degrades with fewer masks, as expected. For our framework, the labeled and unlabeled images are mixed and randomly sampled in each batch. We observe a 0.7% to 1.9% improvement with our regularized loss. Note that this result is highly preliminary; a detailed analysis of overfitting and generalization, and a comparison to recent semi-supervised segmentation with extra unlabeled images, is left for future work.
| training data | # of labeled images | 11K | 11K | 7K | 5K | 3K |
|---|---|---|---|---|---|---|
| | # of unlabeled images | 10K | 0 | 4K | 6K | 8K |
| losses | cross entropy only | 63.5% | 63.5% | 61.5% | 60.1% | 57.6% |
| | cross entropy + CRF reg. | 64.6% | 63.5% | 63.4% | 61.8% | 58.3% |
5 Conclusion and Future Work
Regularized semi-supervised losses are a principled approach to semi-supervised deep learning [1, 2] in general. We utilize this principle for weakly supervised CNN segmentation. In particular, this paper is a continuation of the study of losses motivated by standard shallow segmentation. While our earlier work is entirely about the normalized cut loss, in this paper we propose and evaluate several regularized losses for weakly-supervised CNN segmentation based on Potts/CRF [5, 8], normalized cut and KernelCut regularizers. DenseCRF is very popular as post-processing or as a trainable layer for CNN segmentation. We are the first to use a relaxed version of DenseCRF directly as part of the loss.
In contrast to our direct regularized loss approach, the mainstream in weakly supervised segmentation relies on generating “fake” full masks from partial input and training a network to match the proposals [17, 25, 26, 13, 14, 27]. Proposals can be pre-computed or iteratively updated. Some works even back-propagate through the proposal generation step [25, 13]. We show that proposal methods can be viewed as an approximate alternating direction method (ADM) for optimizing our direct loss. Using the direct loss gives better optimization while being more efficient than proposal generation schemes, since no CRF inference is needed.
This paper pushes the limit of weakly-supervised segmentation. Comprehensive experiments (Sec. 4) with our regularized weakly supervised losses show (1) state-of-the-art performance for weakly supervised CNN segmentation, reaching near full-supervision accuracy, and (2) better quality and efficiency than proposal-generating methods or the normalized cut loss. Alternating schemes (proposal generation) give a higher loss at convergence. Besides weak supervision, we also report preliminary results for full and semi-supervision with unlabeled images.
The study of other relaxations and the corresponding efficient gradient computation is left for future work. It would also be interesting to apply our CRF regularized loss framework to weakly-supervised computer vision problems other than segmentation.
Appendix 0.A Mean-field inference for DenseCRF
For a positive semidefinite affinity matrix W, e.g. with a Gaussian kernel, the relaxed CRF potential Σ_k X^{k⊤} W (1 − X^k) is concave (up to an additive constant). Since the cross entropy is linear and the negative entropy is convex with respect to X, the concave-convex procedure (CCCP) can iteratively minimize an approximation of (9) obtained by linearizing the concave part at the current solution X^t.
A KKT approach for minimizing this approximation subject to the probability simplex constraints yields the following optimum

    x_p^k = (1/Z_p) s_p^k exp( 2λ (W X^{t,k})_p )    (13)

where Z_p is a normalization constant (softmax). Equation (13) is exactly the mean-field update for dense CRF. Note that the updates (13) were also justified in a similar way for convergent optimization of the KL distance between a factorial marginal distribution and the Gibbs distribution induced by the CRF. Our justification of (13) is different: we show an alternative interpretation of the mean-field updates (13) as minimizing the CRF potential plus a negative entropy.
References
-  Weston, J., Ratle, F., Mobahi, H., Collobert, R.: Deep learning via semi-supervised embedding. In: Neural Networks: Tricks of the Trade. Springer (2012) 639–655
-  Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT press (2016)
-  Tang, M., Djelouah, A., Perazzi, F., Boykov, Y., Schroers, C.: Normalized Cut Loss for Weakly-supervised CNN Segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City (June 2018)
-  Shi, J., Malik, J.: Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (2000) 888–905
-  Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE transactions on Pattern Analysis and Machine Intelligence 23(11) (November 2001) 1222–1239
-  Boykov, Y., Jolly, M.P.: Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In: ICCV. Volume I. (July 2001) 105–112
-  Rother, C., Kolmogorov, V., Blake, A.: Grabcut - interactive foreground extraction using iterated graph cuts. In: ACM trans. on Graphics (SIGGRAPH). (2004)
-  Krähenbühl, P., Koltun, V.: Efficient inference in fully connected CRFs with Gaussian edge potentials. In: NIPS. (2011)
-  Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.H.: Conditional random fields as recurrent neural networks. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1529–1537
-  Tang, M., Marin, D., Ayed, I.B., Boykov, Y.: Normalized Cut meets MRF. In: European Conference on Computer Vision (ECCV), Amsterdam, Netherlands (October 2016)
-  Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv:1606.00915 (2016)
-  Arnab, A., Zheng, S., Jayasumana, S., Romera-Paredes, B., Larsson, M., Kirillov, A., Savchynskyy, B., Rother, C., Kahl, F., Torr, P.: Conditional random fields meet deep neural networks for semantic segmentation. IEEE Signal Processing Magazine (2017)
-  Kolesnikov, A., Lampert, C.H.: Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In: European Conference on Computer Vision (ECCV), Springer (2016)
-  Papandreou, G., Chen, L.C., Murphy, K.P., Yuille, A.L.: Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1742–1750
-  Rajchl, M., Lee, M.C., Oktay, O., Kamnitsas, K., Passerat-Palmbach, J., Bai, W., Damodaram, M., Rutherford, M.A., Hajnal, J.V., Kainz, B., et al.: DeepCut: Object segmentation from bounding box annotations using convolutional neural networks. IEEE Transactions on Medical Imaging 36(2) (2017) 674–683
-  Adams, A., Baek, J., Davis, M.A.: Fast high-dimensional filtering using the permutohedral lattice. Computer Graphics Forum 29(2) (2010) 753–762
-  Lin, D., Dai, J., Jia, J., He, K., Sun, J.: Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 3159–3167
-  Chambolle, A., Darbon, J.: On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision 84(3) (April 2009) 288
-  Chan, T.F., Esedoglu, S., Nikolova, M.: Algorithms for finding global minimizers of image segmentation and denoising models. SIAM journal on applied mathematics 66(5) (2006) 1632–1648
-  Pock, T., Chambolle, A., Cremers, D., Bischof, H.: A convex relaxation approach for computing minimal partitions. In: IEEE conference on Computer Vision and Pattern Recognition (CVPR). (2009)
-  Couprie, C., Grady, L., Najman, L., Talbot, H.: A unifying graph-based optimization framework. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(7) (July 2011) 1384–1399
-  Desmaison, A., Bunel, R., Kohli, P., Torr, P.H., Kumar, M.P.: Efficient continuous relaxations for dense crf. In: European Conference on Computer Vision, Springer (2016) 818–833
-  Thalaiyasingam, A., Desmaison, A., Bunel, R., Salzmann, M., Torr, P.H., Kumar, M.P.: Efficient linear programming for dense CRFs. In: Conference on Computer Vision and Pattern Recognition. (2017)
-  Krähenbühl, P., Koltun, V.: Parameter learning and convergent inference for dense random fields. In: International Conference on Machine Learning (ICML). (2013)
-  Vernaza, P., Chandraker, M.: Learning random-walk label propagation for weakly-supervised semantic segmentation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Volume 3. (2017)
-  Khoreva, A., Benenson, R., Hosang, J., Hein, M., Schiele, B.: Simple does it: Weakly supervised instance and semantic segmentation. In: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA (2017)
-  Dai, J., He, K., Sun, J.: Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 1635–1643
-  Grady, L.: Random walks for image segmentation. IEEE transactions on pattern analysis and machine intelligence 28(11) (2006) 1768–1783
-  Chapelle, O., Schölkopf, B., Zien, A., eds.: Semi-Supervised Learning. MIT Press, Cambridge, MA (2006)
-  Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3(1) (2011) 1–122
-  Baque, P., Bagautdinov, T.M., Fleuret, F., Fua, P.: Principled parallel mean-field inference for discrete random fields. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (2016)
-  Paris, S., Durand, F.: A fast approximation of the bilateral filter using a signal processing approach. International journal of computer vision 81(1) (2009) 24–52
-  Cheng, M.M., Mitra, N.J., Huang, X., Torr, P.H.S., Hu, S.M.: Global contrast based salient region detection. IEEE TPAMI 37(3) (2015) 569–582
-  Hung, W.C., Tsai, Y.H., Liou, Y.T., Lin, Y.Y., Yang, M.H.: Adversarial learning for semi-supervised semantic segmentation. arXiv preprint arXiv:1802.07934 (2018)