Semantic segmentation benefits robotics-related applications, especially autonomous driving. Most research on semantic segmentation focuses only on increasing the accuracy of segmentation models, with little attention to computationally efficient solutions. The few works conducted in this direction do not provide principled methods to evaluate the different design choices for segmentation. In this paper, we address this gap by presenting a real-time semantic segmentation benchmarking framework with a decoupled design for feature extraction and decoding methods. The framework comprises different network architectures for feature extraction, such as VGG16, ResNet18, MobileNet, and ShuffleNet. It also comprises multiple meta-architectures for segmentation that define the decoding methodology, including SkipNet, UNet, and Dilation Frontend. Experimental results are presented on the Cityscapes dataset for urban scenes. The modular design allows novel architectures to emerge that lead to a 143x reduction in GFLOPs compared to SegNet. This benchmarking framework is publicly available at https://github.com/MSiam/TFSegmentation.
RTSeg: Real-time Semantic Segmentation Comparative Study
Semantic segmentation has made progress in recent years with deep learning. The first prominent work in this field was fully convolutional networks (FCNs). FCN was proposed as an end-to-end method to learn pixel-wise classification, with transposed convolution used for upsampling. A skip architecture that utilized higher-resolution feature maps was used to refine the segmentation output. That method paved the way for subsequent advances in segmentation accuracy. Multi-scale approaches [2, 3], structured models [4, 5], and spatio-temporal architectures introduced different directions for improving accuracy. All of the above approaches focused on the accuracy and robustness of segmentation. Well-known benchmarks and datasets for semantic segmentation, such as Pascal, NYU RGBD, Cityscapes, and Mapillary, boosted the competition toward improving accuracy.
However, little attention has been given to the computational efficiency of these networks, even though efficiency has a tremendous impact in applications such as autonomous driving. There exist few works that try to address the efficiency of segmentation networks, such as [11, 12]. The survey on semantic segmentation presented a comparative study between different segmentation architectures, including ENet. Yet there is no principled comparison of different networks and meta-architectures: these previous studies compared networks as a whole, without comparing the effect of individual modules. This does not enable researchers and practitioners to pick the design choices best suited to the required task.
In this paper we propose the first framework for benchmarking real-time segmentation architectures. Our main contributions are: (1) a modular decoupling of the segmentation architecture into the feature extraction network and the decoding method, termed the meta-architecture, as shown in Figure 1; this separation helps in understanding the impact of different parts of the network on real-time performance. (2) A detailed ablation study highlighting the trade-off between accuracy and computational efficiency. (3) Two novel segmentation architectures, enabled by the modular design, that use MobileNet and ShuffleNet with multiple decoding methods; ShuffleNet leads to a 143x reduction in GFLOPs compared to SegNet. Our framework is built on top of TensorFlow and is publicly available.
The meta-architectures for semantic segmentation define the decoding method used for in-network upsampling. All of the network architectures share the same downsampling factor of 32, achieved either with pooling layers or with strided convolutions. This ensures that the different meta-architectures have a unified downsampling factor, so that only the effect of the decoding method is assessed.
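As a concrete check of the shared downsampling factor, a small helper (hypothetical, not part of the released framework) can compute the encoder's output resolution for the 512x1024 Cityscapes crops used later:

```python
# Hypothetical helper illustrating the shared downsampling factor of 32:
# an encoder with five stride-2 stages (pooling or strided convolutions)
# reduces a 512x1024 input to a 16x32 feature map.

def encoder_output_size(height, width, downsampling_factor=32):
    """Spatial size of the final encoder feature map."""
    assert height % downsampling_factor == 0 and width % downsampling_factor == 0
    return height // downsampling_factor, width // downsampling_factor

print(encoder_output_size(512, 1024))  # (16, 32)
```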
The SkipNet architecture denotes an architecture similar to FCN8s. The main idea of the skip architecture is to use feature maps of higher resolution to improve the output segmentation. SkipNet applies transposed convolution on heatmaps in the label space instead of performing it in the feature space, which yields a more computationally efficient decoding method than the others. Since all feature extraction networks share the same downsampling factor of 32, they follow the stride-8 version of the skip architecture. Higher-resolution feature maps are followed by a 1x1 convolution that maps from the feature space to the label space, producing a heatmap for each class. The final heatmap, at a downsampling factor of 32, is followed by a transposed convolution with stride 2, and element-wise addition is performed between these upsampled heatmaps and the higher-resolution heatmaps. Finally, the output heatmaps are upsampled by a transposed convolution with stride 8. Figure 2(a) shows the SkipNet architecture utilizing a MobileNet encoder.
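The efficiency argument for decoding in label space can be made concrete with rough multiply-accumulate counts. The widths below (20 classes, a 512-channel feature map, 4x4 kernels, a 64x128 output map) are illustrative assumptions, not figures from the paper:

```python
def transposed_conv_macs(out_h, out_w, k, c_in, c_out):
    # Rough multiply-accumulate count for a (transposed) convolution layer.
    return out_h * out_w * k * k * c_in * c_out

# SkipNet upsamples 20-channel class heatmaps; upsampling in feature space
# would instead operate on e.g. 512-channel maps at the same resolution.
label_space = transposed_conv_macs(64, 128, 4, 20, 20)
feature_space = transposed_conv_macs(64, 128, 4, 512, 512)
print(feature_space / label_space)  # the channel count dominates the cost
```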
The U-Net architecture denotes the decoding method that upsamples features using a transposed convolution corresponding to each downsampling stage. The upsampled features are fused with the corresponding feature maps from the encoder at the same resolution. This stage-wise upsampling provides higher accuracy than one-shot 8x upsampling. The current fusion method used in the framework is element-wise addition. Concatenation as a fusion method can provide better accuracy, as it enables the network to learn a weighted fusion of features; nonetheless, it increases the computational cost, which is directly affected by the number of channels. The upsampled features are then followed by a 1x1 convolution to output the final pixel-wise classification. Figure 2(b) shows the UNet architecture using MobileNet as the feature extraction network.
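A shape-level sketch of this stage-wise decoding, assuming five stride-2 encoder stages, with plain 2x upsampling standing in for the transposed convolutions:

```python
# Sketch of UNet-style decoding with element-wise addition as the fusion
# method. Only shapes are tracked; the encoder stage resolutions are an
# illustrative assumption for a 512x1024 input.

def unet_decode_shapes(encoder_shapes):
    """encoder_shapes: per-stage (h, w), ordered finest to coarsest."""
    h, w = encoder_shapes[-1]              # start from the coarsest map
    for skip_h, skip_w in reversed(encoder_shapes[:-1]):
        h, w = h * 2, w * 2                # transposed conv, stride 2
        assert (h, w) == (skip_h, skip_w)  # fuse (add) the encoder features
    return h, w

stages = [(256, 512), (128, 256), (64, 128), (32, 64), (16, 32)]
print(unet_decode_shapes(stages))  # (256, 512)
```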
The Dilation Frontend architecture utilizes dilated convolution instead of downsampling the feature maps. Dilated convolution enables the network to maintain an adequate receptive field without degrading the resolution through pooling or strided convolution. A side effect of this method, however, is an increase in computational cost, since the operations are performed on higher-resolution feature maps. The encoder network is modified to have a downsampling factor of 8 instead of 32, by either removing pooling layers or converting stride-2 convolutions to stride 1. The removed pooling layers or strided convolutions are then replaced with two dilated convolutions with dilation factors 2 and 4, respectively.
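The receptive-field effect of dilation can be quantified: a k x k kernel with dilation d spans k + (k - 1)(d - 1) input positions per axis. A minimal sketch for the dilation factors 2 and 4 used above (assuming 3x3 kernels, which the paper does not state explicitly):

```python
def effective_kernel(k, dilation):
    # A k x k kernel with dilation d covers k + (k - 1) * (d - 1)
    # input positions along each spatial axis.
    return k + (k - 1) * (dilation - 1)

# The two dilated convolutions enlarge the kernels' spatial extent
# without shrinking the feature maps:
print(effective_kernel(3, 2))  # 5
print(effective_kernel(3, 4))  # 9
```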
To achieve real-time performance, multiple network architectures are integrated into the benchmarking framework. The framework includes four state-of-the-art real-time network architectures for feature extraction: (1) VGG16, (2) ResNet18, (3) MobileNet, and (4) ShuffleNet. VGG16 acts as a baseline method to compare against; the other architectures have been used in real-time systems for detection and classification. ResNet18 incorporates residual blocks that direct the network toward learning the residual representation on top of an identity mapping.
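The residual idea can be sketched in a few lines; the element-wise toy branch below is a stand-in for the convolutional layers of an actual residual block:

```python
def residual_block(x, branch):
    # ResNet-style identity shortcut: the block learns the residual
    # branch(x) on top of the identity mapping, and outputs x + branch(x).
    return [xi + bi for xi, bi in zip(x, branch(x))]

double = lambda v: [2 * vi for vi in v]  # toy stand-in for the conv branch
print(residual_block([1.0, 2.0], double))  # [3.0, 6.0]
```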
The MobileNet architecture is based on depthwise separable convolution. It can be considered the extreme case of the inception module, where a separate spatial convolution is applied for each channel, denoted as depthwise convolution, followed by a 1x1 convolution over all channels to merge the output, denoted as pointwise convolution. The separation into depthwise and pointwise convolutions improves computational efficiency on the one hand; on the other hand, it improves accuracy, as the cross-channel and spatial correlations are learned separately.
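The efficiency gain is easy to quantify: a standard k x k convolution has k*k*C_in*C_out weights, while the separable version has k*k*C_in (depthwise) plus C_in*C_out (pointwise). A sketch with illustrative channel widths, not taken from the paper:

```python
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one spatial filter per input channel
    pointwise = c_in * c_out   # 1x1 cross-channel mixing
    return depthwise + pointwise

# Illustrative layer: 3x3 convolution, 256 -> 256 channels.
std = standard_conv_params(3, 256, 256)   # 589,824 parameters
sep = separable_conv_params(3, 256, 256)  #  67,840 parameters
print(std / sep)  # roughly 8.7x fewer parameters
```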
The ShuffleNet encoder is based on grouped convolution, a generalization of depthwise separable convolution. It uses channel shuffling to ensure connectivity between input and output channels, which eliminates the connectivity restrictions posed by grouped convolutions.
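Channel shuffling itself is a simple reshape-transpose-reshape permutation of the channel axis; a minimal list-based sketch (illustrative, not the framework's implementation):

```python
def channel_shuffle(channels, groups):
    """Reorder channels so each output group mixes all input groups.

    Equivalent to reshaping the channel axis to (groups, n), transposing
    to (n, groups), and flattening back.
    """
    n = len(channels) // groups
    return [channels[g * n + i] for i in range(n) for g in range(groups)]

# Six channels in three groups: {0,1}, {2,3}, {4,5}.
print(channel_shuffle([0, 1, 2, 3, 4, 5], groups=3))  # [0, 2, 4, 1, 3, 5]
```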
In this section, the experimental setup, a detailed ablation study, and results in comparison to the state of the art are reported.
Regularization is utilized to avoid over-fitting. The feature extraction part of the network is initialized with the corresponding encoder pre-trained on ImageNet. A width multiplier of 1 is used for MobileNet throughout all experiments to include all the feature channels. The number of groups used in ShuffleNet is 3; based on previous results on classification and detection, three groups provide adequate accuracy.
Results are reported on the Cityscapes dataset, which contains 5,000 images with fine annotations covering 20 classes, including the ignored class. Another section of the dataset contains coarse annotations for 20,000 labeled images; these are used for coarse pre-training, which improves the segmentation results. Experiments are conducted on images with a resolution of 512x1024.
Table 1: Model | GFLOPs | Class IoU | Class iIoU | Category IoU | Category iIoU
Semantic segmentation is evaluated using mean intersection over union (mIoU), per-class IoU, and per-category IoU. Table 1 shows the results of the ablation study on different encoder-decoder combinations, with mIoU and GFLOPs reported to demonstrate the accuracy-computation trade-off. The main insight gained from our experiments is that the UNet decoding method provides more accurate segmentation than the Dilation Frontend. This is mainly due to the 8x transposed convolution at the end of the Dilation Frontend, in contrast to UNet's stage-wise upsampling. The SkipNet architecture provides results on par with the UNet decoding method, although in some architectures, such as SkipNet-ShuffleNet, it is less accurate than its UNet counterpart by 1.5%.
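For reference, per-class IoU is TP / (TP + FP + FN), and mIoU averages it over classes; a minimal sketch computed from a confusion matrix, with a toy two-class example (hypothetical counts):

```python
def mean_iou(confusion):
    """mIoU from a confusion matrix (rows = ground truth, cols = prediction)."""
    ious = []
    n = len(confusion)
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp
        fn = sum(confusion[c]) - tp
        denom = tp + fp + fn
        if denom > 0:                    # skip classes absent from both
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Toy 2-class confusion matrix (hypothetical pixel counts):
conf = [[8, 2],
        [1, 9]]
print(round(mean_iou(conf), 3))  # 0.739
```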
The UNet method of incremental upsampling within the network provides the best accuracy. However, Table 2 clearly shows that the SkipNet architecture is more computationally efficient, with a 4x reduction in GFLOPs. This is explained by the fact that the transposed convolutions in UNet are applied in the feature space, unlike in SkipNet, where they are applied in the label space. Table 1 also shows that coarse pre-training improves the overall mIoU by 1-4%; the under-represented classes are the ones that benefit most from pre-training.
Experimental results on the Cityscapes test set are shown in Table 3. Although DeepLab provides the best results in terms of accuracy, it is not computationally efficient. ENet is compared to SkipNet-ShuffleNet and SkipNet-MobileNet in terms of accuracy and computational cost. SkipNet-ShuffleNet outperforms ENet in terms of GFLOPs while maintaining an on-par mIoU. Both SkipNet-ShuffleNet and SkipNet-MobileNet outperform SegNet in terms of computational cost and accuracy, with a reduction of up to 143x in GFLOPs. Figure 3 shows qualitative results for different encoders, including MobileNet, ShuffleNet, and ResNet18. MobileNet provides more accurate segmentation than the latter two: SkipNet-MobileNet is able to correctly segment the pedestrian and the signs on the right, unlike the others.
In this paper we present the first principled approach for benchmarking real-time segmentation networks. The decoupled design of the framework separates modules for better quantitative comparison: the first module comprises the feature extraction network architecture, and the second is the meta-architecture that provides the decoding method. Three meta-architectures are included in our framework: the Skip architecture, UNet, and the Dilation Frontend. Different network architectures for feature extraction are included, namely ShuffleNet, MobileNet, VGG16, and ResNet18. Our benchmarking framework provides researchers and practitioners with a means to evaluate the design choices for their tasks.
S. Zheng et al., "Conditional Random Fields as Recurrent Neural Networks," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1529–1537.
M. Cordts et al., "The Cityscapes Dataset for Semantic Urban Scene Understanding," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213–3223.
S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," in International Conference on Machine Learning, 2015, pp. 448–456.