Semantic segmentation is the task of performing per-pixel annotation with a discrete set of semantic class labels. Current state-of-the-art methods in segmentation use deep neural networks (Zhao et al., 2017; Shelhamer et al., 2015), which rely on massive per-pixel labeled datasets (Zhou et al., 2017; Everingham et al., 2009; Cordts et al., 2016). The labels for these datasets are finely annotated and are generally very expensive to collect. In this paper, we consider the scarce data paradigm, a setting where annotating images with fine details is extremely expensive. On the other hand, coarse or noisy segmentation annotations are substantially cheaper to collect, for example, through the use of automated heuristics.
Recently, Veit et al. (2017) introduced a semi-supervised framework that jointly learns from clean and noisy labels to more accurately classify images. They show that the paradigm of first training a network on noisy labels and then fine-tuning on the fine labels is suboptimal. In this work, we propose a related approach for semantic segmentation, in which the goal is to learn jointly from both coarse and fine segmentation masks to produce better image segmentations. The proposed model takes the input image together with a coarse mask and produces a detailed annotation (Figure 1). In particular, we model the segmentation network as conditionally dependent on the coarse masks, based on the intuition that the coarse masks provide a good starting point for the segmentation task.
We use the Cityscapes dataset to evaluate our method. This segmentation dataset has the important characteristic of providing both fine and coarse annotations. Fine annotations are limited to only 5000 images, whereas coarse annotations are available for these 5000 images and for an additional 20000 images. To simulate a setting of scarce data, we limit our training set to a small number of finely annotated images. We primarily focus on the scarce data setting, which we define as having at most 200 finely annotated images (together with their corresponding coarse masks).
This paper’s main contributions are as follows. We present a method for segmentation in the scarce data setting. We utilize the coarse and fine masks jointly and produce better segmentation results than would be obtained when training solely with fine data or using the coarse masks alone. Our approach is validated in a scarce data setting on the Cityscapes dataset, with average gains of 15.52% mIoU (averages are taken over finely annotated dataset sizes of 10, 25, 50, 100, and 200) over the corresponding baseline PSPNet and average gains of 5.28% over using the coarse masks directly as predictions.
2 Related Work
By adapting Convolutional Neural Networks for semantic segmentation, substantial progress has been made on many segmentation benchmarks (Badrinarayanan et al., 2017; Ronneberger et al., 2015; Shelhamer et al., 2015; Chen et al., 2018; Zhao et al., 2017). Semi-supervised semantic segmentation is an active field of research, given that collecting large datasets of finely annotated data is expensive (Pinheiro et al., 2016; Garcia-Garcia et al., 2017; Papandreou et al., 2015). In (Pinheiro et al., 2016), the authors propose a refinement module to merge rich low-level features with high-level object features learned in the upper layers of a CNN. Another method for semi-supervised segmentation is that of Papandreou et al. (2015), which uses Expectation-Maximization (EM) methods to train DCNN models.
Providing low-level information (such as the coarse masks in our method) in the form of an embedding layer has been explored in previous works (Veit et al., 2017; Perez et al., 2017). In (Veit et al., 2017), the noisy labels are jointly embedded with the visual features extracted from an Inception-V3 (Szegedy et al., 2016) ConvNet. We explore an analogous approach, which concatenates the coarse masks with the convolution blocks at different network locations, in the scarce data semantic segmentation domain.
Our goal is to leverage the small set of fine annotations and their coarse counterparts to learn a mapping from the coarse to the fine annotation space. Both the coarse and fine segmentation labels consist of 20 class labels (one being the void label). Formally, we have a small dataset of triplets, each consisting of a fine annotation f_i, a coarse annotation c_i, and an image x_i. We thus have D = {(x_i, c_i, f_i)}, i = 1, ..., N. Each annotation is a mask whose pixels can take 20 values (each value representing a different class). Our goal is to design an effective approach that leverages the quality of the fine labels f_i to map the coarse labels c_i to improved segmentation masks. The mapping is conditioned on the input image. We coin the name “detailer” for an architecture that achieves this goal.
3.1 Architecture of Detailer
We use a PSPNet as our classifier baseline, with training details as described in the original paper. We propose a detailer architecture that is a modified PSPNet, depicted in Figure 2. For each triplet in the training set, we forward the image through the network and insert an embedding of the coarse annotation into the network. The embedding is inserted either before the PPM module (before-pool), after the PPM module (after-pool), or after the final convolution block (after-final). In Section 5, Results, we use the after-final embedding (this decision is justified with an ablation study; see Appendix).
To embed the coarse mask, we expand the 2D annotation image of size H x W to an H x W x 20 tensor, where H and W are the height and width of the mask and each channel k is a binary indicator of whether a pixel belongs to class k. A 1x1 convolution is applied to this mask tensor to produce its embedding. The number of filters used for this convolution is a hyper-parameter, with a default value of 800 filters (this choice is made via an ablation study; see Appendix). This embedding is concatenated with the visual feature tensor at the configured network location.
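As a concrete illustration, the one-hot expansion and the 1x1 convolution can be sketched as follows (a minimal NumPy sketch with our own function names; in the actual model the convolution weights are learned and applied inside the network):

```python
import numpy as np

def one_hot_mask(mask, num_classes=20):
    """Expand an H x W integer class mask into an H x W x C binary tensor,
    where channel k indicates membership in class k."""
    h, w = mask.shape
    onehot = np.zeros((h, w, num_classes), dtype=np.float32)
    onehot[np.arange(h)[:, None], np.arange(w)[None, :], mask] = 1.0
    return onehot

def embed_coarse(mask, weights):
    """A 1x1 convolution is a per-pixel linear map: contract the class
    dimension against a (num_classes, num_filters) weight matrix."""
    onehot = one_hot_mask(mask, num_classes=weights.shape[0])
    return onehot @ weights  # H x W x num_filters
```

The resulting H x W x 800 embedding would then be concatenated with the visual feature tensor along the channel dimension.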
Another key detail of the detailer network is an identity-skip connection that adds the coarse annotation labels from the training set to the output of the cleaning module. The skip connection is inspired by the approach of He et al. (2016). Due to this connection, the network only needs to learn the difference between the fine and the coarse annotation labels, thus simplifying the learning task. For each image, the one-hot encoding of the coarse mask and the predicted annotation tensor both take the shape H x W x 20: the first is the one-hot encoding tensor described earlier, and the second represents the corrections to the predicted class values. These two tensors are added together to form the final detailed prediction.
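The identity-skip combination can be sketched as follows (our own minimal illustration; shapes follow the description above):

```python
def detailed_prediction(coarse_onehot, corrections):
    """Add the H x W x 20 one-hot coarse mask to the H x W x 20 correction
    tensor predicted by the network, then take the per-pixel argmax to get
    the final detailed class map."""
    logits = coarse_onehot + corrections
    return logits.argmax(axis=-1)
```

With zero corrections the output simply reproduces the coarse labels; the network only has to produce nonzero values where the coarse mask needs fixing.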
4 Experimental Setting
Our experiments are done with simulated scarce subsets of the Cityscapes dataset, using mIoU as our evaluation metric.
We evaluate our proposed architecture on the Cityscapes urban scene dataset (Cordts et al., 2016). The dataset is well suited to our task, as it contains both a coarse and a fine annotation for 5000 images. Each coarse mask was made with the objective of annotating as many pixels as possible within 7 minutes, by labeling coarse polygons under the sole constraint that each polygon must only include pixels belonging to a single class. The coarse masks have very few false positives: 97% of all labeled pixels in the coarse annotations were assigned the same class as in the fine annotations. In comparison, annotating each of the fine masks takes more than 1.5 hours on average.
The mIoU of the coarse masks compared against the fine masks on the validation set is 47.81%. Because our setting uses the coarse masks at test time, an mIoU of 47.81% can be achieved by simply outputting the coarse mask directly. Thus, our detailer should exceed this value at test time.
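For reference, mIoU here is the standard class-averaged intersection-over-union. A minimal sketch (our own helper; we assume void/unlabeled pixels are encoded with an ignore index):

```python
import numpy as np

def mean_iou(pred, target, num_classes=19, ignore_index=255):
    """Class-averaged IoU over classes present in pred or target,
    skipping pixels marked with the ignore index."""
    valid = target != ignore_index
    ious = []
    for c in range(num_classes):
        p = (pred == c) & valid
        t = (target == c) & valid
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; do not count it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

Evaluating the coarse masks against the fine masks with such a metric is what yields the 47.81% baseline figure quoted above.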
4.2 Training Details
We use data augmentation as follows: random scaling (between 0.5 and 2.0), random cropping, random rotation (between -10 and +10 degrees), and random horizontal flipping. A ResNet-101 pretrained on ImageNet is used to initialize the initial layers of the PSPNet. Pixel values are normalized using the ImageNet mean and standard deviation. To train the networks we use the cross-entropy loss and SGD with an initial learning rate of 0.01, polynomial learning rate decay with power 0.9 applied every iteration, and a momentum of 0.99. We stop training after 10K mini-batches (batch size 8) for dataset sizes of 200 and under, and after 90K mini-batches for dataset sizes above 200 (to allow training to fully converge).
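The polynomial ("poly") decay schedule can be written as follows (a standard formulation that we assume matches the one used here):

```python
def poly_lr(base_lr, iteration, max_iterations, power=0.9):
    """'Poly' learning-rate decay: base_lr * (1 - iter / max_iter) ** power.
    The rate starts at base_lr and decays smoothly to zero at max_iterations."""
    return base_lr * (1.0 - iteration / max_iterations) ** power
```

For example, with base_lr = 0.01 and max_iterations = 10000, the rate starts at 0.01 and reaches exactly zero at the final iteration.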
Our approach shows that even for very small dataset sizes, it is still possible to learn a segmentation network with relatively high performance. We show that our detailer outperforms both the coarse mask and the regular classifier for small dataset sizes. Surprisingly, we also observe that although higher resolution images typically work much better than low resolution images when ample segmentation data is available, the same does not hold in the scarce data setting for the detailer (see the “Higher Resolution” section).
When trained on a very small dataset of fine and coarse annotations, our detailer network performs reasonably well on the Cityscapes validation set, as shown in Table 1. For every dataset size, the detailer outperforms the classifier, with an average gain of 15.52% mIoU. Across dataset sizes from 10 to 200 images, we also outperform the coarse mask by an average of 5.28% (the coarse mask mIoU is 47.81%). Qualitative results are shown in Figure 3.
[Table 1: Detailer mIoU and Classifier mIoU for each dataset size.]
Composite Prediction. A reasonable concern is that our improvement over the baseline classifier shown in Table 1 is solely due to the fact that we incorporate the coarse mask (which already achieves 47.81%). To confirm that our coarse injection model (Figure 2) improves results, we define the notion of a composite model. A composite model takes the coarse mask and adds the predictions of the trained segmentation model for all pixels that were not assigned a label by the coarse mask (pixels labeled ‘ignore’). From Table 3 we see that when we make both the trained detailer and the trained classifier composite, the detailer still outperforms the classifier, indicating that the injection approach does in fact improve performance.
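The composite rule can be sketched as follows (our own NumPy illustration; we assume unlabeled coarse pixels carry an ‘ignore’ value):

```python
import numpy as np

def composite_prediction(coarse, model_pred, ignore_index=255):
    """Keep the coarse label wherever one exists; fall back to the model's
    prediction for pixels the coarse mask left unlabeled."""
    out = coarse.copy()
    unlabeled = coarse == ignore_index
    out[unlabeled] = model_pred[unlabeled]
    return out
```

Because the coarse labels are highly reliable where present, any difference between composite models comes from how well each model fills in the unlabeled pixels.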
Higher Resolution. As opposed to the scarce data, low-resolution setting, a slightly more expensive setting of scarce data with high-resolution annotations could also occur in the real world. To simulate this, we use the same approach as before but re-size the Cityscapes data to a higher resolution. Comparing the detailer results at both resolutions (Table 3), we see that for most dataset sizes the detailer does better at low resolution than at high resolution. This is an important result, as it shows that in the scarce data setting with a coarse mask, low resolution annotations are preferable to high resolution annotations. Our results indicate that one can save resources by providing coarse and fine annotations at a low resolution instead of at a high resolution.
[Table 3: Detailer mIoU at higher vs. lower resolution for each dataset size.]
Distillation. Our detailer predicts detailed masks using coarse masks as input. These detailed masks can be used to distill the semantic information into a student network, thereby removing the need for coarse masks at test time. We observe that training a PSPNet using just 100 detailed masks (obtained from our detailer trained on a scarce dataset of 100 images) gives 37.85% mIoU, exceeding the 34.61% mIoU obtained when training a PSPNet with the corresponding 100 coarse masks. This result shows that the detailing method captures important semantic information which can be transferred to a student network.
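The distillation step amounts to generating pseudo-labels with the trained detailer and then training the student on those masks alone. A hedged sketch (the `detailer_fn` callable and the data layout are our own assumptions, not the paper's API):

```python
def distill_labels(detailer_fn, samples):
    """Produce detailed pseudo-label masks for student training.
    detailer_fn(image, coarse) is assumed to return H x W x C class scores;
    samples is an iterable of (image, coarse_mask) pairs."""
    return [detailer_fn(image, coarse).argmax(axis=-1)
            for image, coarse in samples]
```

A standard PSPNet can then be trained on these pseudo-labels exactly as it would be on ground-truth fine masks, which is what yields the 37.85% figure above.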
We propose a method of leveraging coarse annotations to improve performance of deep neural networks for semantic segmentation in the scarce data setting. Our results show that in the case of scarce data, significant improvement can be attained with the addition of coarse annotations. We observe an increase in segmentation performance of 15.52% mIoU on average when comparing the proposed detailer to a vanilla PSPNet. Additionally, we observe that although models trained on higher resolution images tend to perform better given sufficient training data, in the setting of scarce training data, low resolution images lead to better performance.
- Badrinarayanan et al. (2017) Badrinarayanan, Vijay, Kendall, Alex, and Cipolla, Roberto. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:2481–2495, 2017.
- Chen et al. (2018) Chen, Liang-Chieh, Papandreou, George, Kokkinos, Iasonas, Murphy, Kevin, and Yuille, Alan L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40:834–848, 2018.
- Cordts et al. (2016) Cordts, Marius, Omran, Mohamed, Ramos, Sebastian, Rehfeld, Timo, Enzweiler, Markus, Benenson, Rodrigo, Franke, Uwe, Roth, Stefan, and Schiele, Bernt. The cityscapes dataset for semantic urban scene understanding. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
- Everingham et al. (2009) Everingham, Mark, Gool, Luc Van, Williams, Christopher K. I., Winn, John M., and Zisserman, Andrew. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88:303–338, 2009.
- Garcia-Garcia et al. (2017) Garcia-Garcia, Alberto, Orts, Sergio, Oprea, Sergiu, Villena-Martinez, Victor, and Rodríguez, José García. A review on deep learning techniques applied to semantic segmentation. CoRR, abs/1704.06857, 2017.
- He et al. (2016) He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
- Papandreou et al. (2015) Papandreou, George, Chen, Liang-Chieh, Murphy, Kevin P., and Yuille, Alan L. Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation. 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1742–1750, 2015.
- Perez et al. (2017) Perez, Ethan, de Vries, Harm, Strub, Florian, Dumoulin, Vincent, and Courville, Aaron C. Learning visual reasoning without strong priors. CoRR, abs/1707.03017, 2017.
- Pinheiro et al. (2016) Pinheiro, Pedro H. O., Lin, Tsung-Yi, Collobert, Ronan, and Dollar, Piotr. Learning to refine object segments. In ECCV, 2016.
- Ronneberger et al. (2015) Ronneberger, Olaf, Fischer, Philipp, and Brox, Thomas. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
- Shelhamer et al. (2015) Shelhamer, Evan, Long, Jonathan, and Darrell, Trevor. Fully convolutional networks for semantic segmentation. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–3440, 2015.
- Szegedy et al. (2016) Szegedy, Christian, Vanhoucke, Vincent, Ioffe, Sergey, Shlens, Jonathon, and Wojna, Zbigniew. Rethinking the inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826, 2016.
- Veit et al. (2017) Veit, Andreas, Alldrin, Neil, Chechik, Gal, Krasin, Ivan, Gupta, Abhinav, and Belongie, Serge J. Learning from noisy large-scale datasets with minimal supervision. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6575–6583, 2017.
- Zhao et al. (2017) Zhao, Hengshuang, Shi, Jianping, Qi, Xiaojuan, Wang, Xiaogang, and Jia, Jiaya. Pyramid scene parsing network. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6230–6239, 2017.
- Zhou et al. (2017) Zhou, Bolei, Zhao, Hang, Puig, Xavier, Fidler, Sanja, Barriuso, Adela, and Torralba, Antonio. Scene parsing through ade20k dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
We perform ablation studies to justify our architectural decisions for the detailer. The After-Final location is selected for coarse embedding injection based on the results of the ablation study shown in Table 4. We choose 800 filters for the coarse embedding in the detailer network based on the ablation results in Table 5.
[Table 4: mIoU for each coarse embedding insertion location.]