Tensorflow implementation : U-net and FCN with global convolution
One of the recent trends [30, 31, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) throughout the entire network, because stacked small filters are more efficient than a large kernel given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-the-art performance on two public benchmarks and significantly outperforms previous results: 82.2% on the PASCAL VOC 2012 dataset and 76.9% on Cityscapes.
Semantic segmentation can be considered a per-pixel classification problem. There are two challenges in this task: 1) classification: an object associated with a specific semantic concept should be marked correctly; 2) localization: the classification label for a pixel must be aligned to the appropriate coordinates in the output score map. A well-designed segmentation model should deal with both issues simultaneously.
However, these two tasks are naturally contradictory. For the classification task, models are required to be invariant to various transformations such as translation and rotation. For the localization task, models should instead be transformation-sensitive, i.e., precisely locate every pixel for each semantic category. Conventional semantic segmentation algorithms mainly target the localization issue, as shown in Figure 1 B, but this might decrease the classification performance.
In this paper, we propose an improved network architecture, called Global Convolutional Network (GCN), to deal with the above two challenges simultaneously. We follow two design principles: 1) from the localization view, the model structure should be fully convolutional to retain the localization performance, and no fully-connected or global pooling layers should be used, as these layers discard the localization information; 2) from the classification view, a large kernel size should be adopted in the network architecture to enable dense connections between feature maps and per-pixel classifiers, which enhances the capability to handle different transformations. These two principles lead to our GCN, as in Figure 2 A. An FCN-like structure is employed as our basic framework, and our GCN is used to generate semantic score maps. To make global convolution practical, we adopt symmetric, separable large filters to reduce the model parameters and computation cost. To further improve the localization ability near object boundaries, we introduce a boundary refinement block that models boundary alignment as a residual structure, shown in Figure 2 C. Unlike CRF-like post-processing, our boundary refinement block is integrated into the network and trained end-to-end.
Our contributions are summarized as follows: 1) we propose the Global Convolutional Network for semantic segmentation, which explicitly addresses the “classification” and “localization” problems simultaneously; 2) we introduce a Boundary Refinement block which further improves the localization performance near object boundaries; 3) we achieve state-of-the-art results on two standard benchmarks, with 82.2% on PASCAL VOC 2012 and 76.9% on Cityscapes.
In this section we briefly review the literature on semantic segmentation. One of the most popular CNN-based works is the Fully Convolutional Network (FCN). By converting the fully-connected layers into convolutional layers and concatenating the intermediate score maps, FCN has outperformed many traditional methods on semantic segmentation. Following the structure of FCN, several works try to improve the semantic segmentation task along the following three directions.
Context Embedding in semantic segmentation is a hot topic. Among the first, Zoom-out proposes hand-crafted hierarchical context features, while ParseNet adds a global pooling branch to extract context information. Further, Dilated-Net appends several layers after the score map to embed multi-scale context, and Deeplab-V2 uses Atrous Spatial Pyramid Pooling, a combination of atrous convolutions with different sampling rates, to embed the context directly from the feature map.
Resolution Enlarging is another research direction in semantic segmentation. Initially, FCN proposes the deconvolution (i.e., inverse of convolution) operation to increase the resolution of the small score map. Further, Deconv-Net and SegNet introduce the unpooling operation (i.e., inverse of pooling) and an hourglass-like network to learn the upsampling process. More recently, LRR argues that upsampling a feature map is better than upsampling a score map. Instead of learning the upsampling process, Deeplab and Dilated-Net propose a special dilated convolution to directly increase the spatial size of small feature maps, resulting in a larger score map.
Boundary Alignment tries to refine the predictions near object boundaries. Among the many methods, the Conditional Random Field (CRF) is often employed here because of its sound mathematical formulation. Deeplab directly employs denseCRF, a CRF variant built on a fully-connected graph, as a post-processing step after the CNN. CRFAsRNN then models denseCRF as an RNN-style operator and proposes an end-to-end pipeline, yet it involves heavy CPU computation on the Permutohedral Lattice. DPN makes a different approximation of denseCRF and puts the whole pipeline entirely on the GPU. Furthermore, Adelaide deeply incorporates CRF and CNN, where hand-crafted potentials are replaced by convolutions and nonlinearities. There are also alternatives to CRF: the Bilateral Solver is a CRF-like model that achieves comparable performance at 10x speed, and another line of work introduces the bilateral filter to learn specific pairwise potentials within the CNN.
In contrast to previous works, we argue that semantic segmentation is a classification task on a large feature map, and our Global Convolutional Network can simultaneously fulfill the demands of classification and localization.
In this section, we first propose a novel Global Convolutional Network (GCN) to address the contradictory aspects of semantic segmentation: classification and localization. Then, using GCN, we design a fully-convolutional framework for the semantic segmentation task.
The task of semantic segmentation, or pixel-wise classification, requires outputting a score map that assigns each pixel of the input image a semantic label. As mentioned in the Introduction, this task implies two challenges: classification and localization. However, the requirements of the classification and localization problems are naturally contradictory: (1) for the classification task, models are required to be invariant to transformations of the input: objects may be shifted, rotated or rescaled, but the classification results are expected to stay unchanged; (2) for the localization task, models should be transformation-sensitive, because the localization results depend on the positions of the inputs.
In deep learning, the differences between classification and localization lead to different styles of models. For classification, most modern frameworks such as AlexNet, VGG Net, GoogleNet [31, 32] or ResNet employ “Cone-shaped” networks, shown in Figure 1 A: features are extracted from a relatively small hidden layer that is coarse in the spatial dimensions, and classifiers are densely connected to the entire feature map via fully-connected layers [20, 30] or global pooling layers [31, 32, 14]. This makes the features robust to local disturbances and allows the classifiers to handle different types of input transformations. For localization, in contrast, we need relatively large feature maps to encode more spatial information. That is why most semantic segmentation frameworks, such as FCN [25, 29], DeepLab [6, 7] and Deconv-Net, adopt “Barrel-shaped” networks, shown in Figure 1 B. Techniques such as deconvolution, unpooling [27, 3] and dilated convolution [6, 36] are used to generate high-resolution feature maps, and classifiers are then connected locally to each spatial location of the feature map to generate pixel-wise semantic labels.
We notice that current state-of-the-art semantic segmentation models [25, 6, 27] mainly follow the design principles for localization, which, however, may be suboptimal for classification. Because classifiers are connected locally rather than globally to the feature map, it is difficult for them to handle different variations of transformations of the input. For example, consider the situations in Figure 3: a classifier is aligned to the center of an input object, so it is expected to give the semantic label for that object. (Feature maps from modern networks such as GoogleNet or ResNet usually have a very large receptive field because of the deep architecture; however, studies show that the network tends to gather information mainly from a much smaller region inside the receptive field, which we call the valid receptive field (VRF) in this paper.) At first, the VRF is large enough to hold the entire object. However, if the input object is resized to a larger scale, the VRF can only cover a part of the object, which may be harmful for classification. It becomes even worse if larger feature maps are used, because the gap between classification and localization grows larger.
Based on the above observations, we try to design a new architecture to overcome these drawbacks. First, from the localization view, the structure must be fully convolutional, without the fully-connected or global pooling layers used by many classification networks, since these discard localization information. Second, from the classification view, motivated by the densely-connected structure of classification models, the kernel size of the convolutional structure should be as large as possible. In particular, if the kernel size increases to the spatial size of the feature map (named global convolution), the network shares the same benefit as pure classification models. Based on these two principles, we propose a novel Global Convolutional Network (GCN), shown in Figure 2 B. Instead of directly using a larger kernel or global convolution, our GCN module employs a combination of 1×k + k×1 and k×1 + 1×k convolutions, which enables dense connections within a k×k region of the feature map. Different from previously proposed separable kernels, we do not use any nonlinearity after the convolution layers. Compared with the trivial k×k convolution, our GCN structure involves only O(2/k) of the computation cost and number of parameters, which is more practical for large kernel sizes.
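The O(2/k) parameter claim can be checked with simple arithmetic. The sketch below assumes each branch maps the input channels directly to the output score channels and ignores bias terms, a simplification since the exact layer widths are not given in this text:

```python
def gcn_params(k, c_in, c_out):
    """Parameters of a GCN module: two branches, each a 1xk convolution
    followed by a kx1 convolution (biases ignored)."""
    per_branch = k * c_in * c_out + k * c_out * c_out
    return 2 * per_branch

def trivial_params(k, c_in, c_out):
    """Parameters of a single trivial k x k convolution."""
    return k * k * c_in * c_out

# With c_out << c_in the ratio approaches 2/k: the GCN module costs
# roughly 2*k*c_in*c_out parameters versus k*k*c_in*c_out.
```

For instance, with k = 15, c_in = 2048 and c_out = 21 (the hypothetical score channels), the ratio is about 0.135, close to 2/15 ≈ 0.133.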
Our overall segmentation model is shown in Figure 2. We use a pretrained ResNet as the feature network and FCN4 [25, 35] as the segmentation framework. Multi-scale feature maps are extracted from different stages of the feature network, and Global Convolutional Network structures are used to generate multi-scale semantic score maps for each class. Similar to [25, 35], score maps of lower resolution are upsampled with a deconvolution layer and then added to higher-resolution ones to generate new score maps. The final semantic score map is generated after the last upsampling and is used to output the prediction results.
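The coarse-to-fine score fusion described above can be sketched as follows, using nearest-neighbour upsampling as a stand-in for the learned deconvolution layer (an assumption for illustration only):

```python
import numpy as np

def upsample2x(score):
    """Nearest-neighbour 2x upsampling, a stand-in for the learned
    deconvolution layer used in the actual model."""
    return score.repeat(2, axis=0).repeat(2, axis=1)

def fuse_scores(scores):
    """Fuse per-stage score maps from coarse to fine: upsample the
    running map by 2x and add the next, higher-resolution one."""
    fused = scores[0]                 # coarsest score map
    for finer in scores[1:]:
        fused = upsample2x(fused) + finer
    return fused
```

The last fused map would then be upsampled once more to the input resolution to produce the final prediction.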
We evaluate our approach on the standard benchmarks PASCAL VOC 2012 [11, 10] and Cityscapes. PASCAL VOC 2012 has 1,464 images for training, 1,449 for validation and 1,456 for testing, covering 20 object classes plus one background class. We also use the Semantic Boundaries Dataset as an auxiliary dataset, resulting in 10,582 training images. We choose the state-of-the-art ResNet-152 (pretrained on ImageNet) as our base model for fine-tuning. During training, we use standard SGD with batch size 1, momentum 0.99 and weight decay 0.0005. Data augmentations such as mean subtraction and horizontal flipping are also applied. The performance is measured by the standard mean intersection-over-union (IoU). All experiments are run with the Caffe framework.
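Mean IoU, the metric used throughout the experiments, can be computed directly from the predicted and ground-truth label maps; a minimal sketch (classes absent from both maps are skipped):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Standard mean intersection-over-union between two label maps.
    For each class, IoU = |pred ∩ gt| / |pred ∪ gt|; the final score
    averages over classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```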
In the following subsections, we first perform a series of ablation experiments to evaluate the effectiveness of our approach, and then report full results on PASCAL VOC 2012 and Cityscapes.
In this subsection, we make apple-to-apple comparisons to evaluate the approaches proposed in Section 3. As mentioned above, we use the PASCAL VOC 2012 validation set for the evaluation. For all succeeding experiments, we pad each input image to 512×512 so that the top-most feature map is 16×16.
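The relation between the padded input size and the top-most feature-map size follows from the network's total downsampling stride, assumed here to be the usual 32 of a ResNet-based FCN:

```python
def top_feature_size(input_size, total_stride=32):
    """Spatial size of the top-most feature map, assuming the network
    downsamples by `total_stride` overall (32 for ResNet-based FCNs)."""
    return input_size // total_stride
```

Under this assumption, a 512-pixel padded input yields a 16×16 top feature map, and an 800-pixel crop yields 25×25.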
In Section 3.1 we proposed the Global Convolutional Network (GCN) to enable dense connections between classifiers and features. The key idea of GCN is to use large kernels, whose size is controlled by the parameter k (see Figure 2 B). To verify this intuition, we enumerate different values of k and test the performance of each. The overall network architecture is as shown in Figure 2 A, except that the Boundary Refinement block is not applied. For better comparison, a naive baseline is added that simply replaces GCN with a plain convolution (shown in Figure 4 B). The results are presented in Table 1.
We try different kernel sizes ranging from 3 to 15. Note that only odd sizes are used, to avoid alignment errors. In the case k = 15, which roughly equals the feature map size (16×16), the structure becomes “really global convolutional”. From the results, we find that the performance increases consistently with the kernel size k. In particular, the “global convolutional” version (k = 15) surpasses the smallest one by a significant margin. The results show that large kernels bring great benefit to our GCN structure, which is consistent with our analysis in Section 3.1.
Further Discussion: In the experiments in Table 1, there are other differences between the baseline and the various versions of GCN, so it is not conclusive to attribute the improvements to large kernels or to GCN itself. For example, one may argue that the extra parameters brought by larger k lead to the performance gain, or that some other simple structure could achieve the same large equivalent kernel size instead of GCN. So we provide more evidence for better understanding.
(1) Are more parameters helpful? In GCN, the number of parameters increases linearly with the kernel size k, so one natural hypothesis is that the improvements in Table 1 are mainly brought by the increased number of parameters. To address this, we compare our GCN with the trivial large-kernel design, a plain k×k convolution shown in Figure 4 C. Results are shown in Table 2. For any given kernel size, the trivial convolution design contains more parameters than GCN, yet the latter is consistently better in performance.
| # of Params (GCN)  | 260K | 434K  | 608K  | 782K  |
| # of Params (Conv) | 387K | 1075K | 2107K | 3484K |
It is also clear that for the trivial convolution version, a larger kernel results in better performance up to a point, beyond which the performance drops. One hypothesis is that too many parameters make the training suffer from overfitting, which weakens the benefit of larger kernels. However, in training we find that trivial large kernels in fact make the network difficult to converge, while our GCN structure does not suffer from this drawback. Thus the actual reason still needs further study.
(2) GCN vs. stack of small convolutions. Instead of GCN, another trivial approach to form a large kernel is to stack small kernel convolutions (for example, a stack of 3×3 kernels in Figure 4 D), which is very common in modern CNN architectures such as VGG-net. For example, one can use two 3×3 convolutions to approximate a 5×5 kernel. In Table 3, we compare GCN with convolutional stacks under different equivalent kernel sizes. Different from VGG-net, we do not apply nonlinearities within the convolutional stacks, to keep consistent with the GCN structure. The results show that GCN still outperforms trivial convolution stacks for any large kernel size.
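The “equivalent kernel size” of such a stack follows from each stride-1 layer adding k−1 pixels to the receptive field; a quick helper:

```python
def equivalent_kernel(num_layers, k=3):
    """Equivalent single-layer kernel size of `num_layers` stacked
    k x k convolutions with stride 1: each layer adds k - 1."""
    return 1 + num_layers * (k - 1)
```

For example, two stacked 3×3 convolutions give an equivalent 5×5 kernel, and seven of them give 15×15.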
For large equivalent kernel sizes, the convolutional stack brings many more parameters than GCN, which may have side effects on the results. So we try to reduce the number of intermediate feature maps of the convolutional stack and make a further comparison. Results are listed in Table 4. It is clear that performance degrades with fewer parameters. In conclusion, GCN is a better structure than trivial convolutional stacks.
| # of Params | 75885K | 28505K | 4307K | 608K |
(3) How does GCN contribute to the segmentation results? In Section 3.1, we claim that GCN improves the classification capability of the segmentation model by introducing dense connections to the feature map, which helps handle large variations of transformations. From this we infer that pixels lying in the center of large objects may benefit more from GCN, because their prediction is very close to a “pure” classification problem. For the boundary pixels of objects, however, the performance is mainly affected by the localization ability.
To verify this inference, we divide the segmentation score map into two parts: a) the boundary region, whose pixels lie close to an object boundary, and b) the internal region, containing the remaining pixels. We evaluate our segmentation model (GCN with k = 15) on both regions. Results are shown in Table 5. We find that our GCN model mainly improves the accuracy in the internal region, while the effect in the boundary region is minor, which strongly supports our argument. Furthermore, in Table 5 we also evaluate the boundary refinement (BR) block described in Section 3.2. In contrast to the GCN structure, BR mainly improves the accuracy in the boundary region, which likewise confirms its effectiveness.
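The boundary/internal split can be sketched by marking pixels whose 4-neighbourhood contains a label change; the neighbourhood width used here is an assumption, since the exact distance threshold is not given in this text:

```python
import numpy as np

def split_regions(label, width=1):
    """Split a label map into boundary vs. internal pixels: a pixel is
    'boundary' if a 4-neighbour within `width` steps has a different
    label (a stand-in for the paper's distance-based definition)."""
    diff = np.zeros_like(label, dtype=bool)
    # mark pixels adjacent to a label change (4-neighbourhood)
    diff[:-1, :] |= label[:-1, :] != label[1:, :]
    diff[1:, :]  |= label[1:, :] != label[:-1, :]
    diff[:, :-1] |= label[:, :-1] != label[:, 1:]
    diff[:, 1:]  |= label[:, 1:] != label[:, :-1]
    boundary = diff
    for _ in range(width - 1):      # dilate to the requested width
        grown = boundary.copy()
        grown[:-1, :] |= boundary[1:, :]
        grown[1:, :]  |= boundary[:-1, :]
        grown[:, :-1] |= boundary[:, 1:]
        grown[:, 1:]  |= boundary[:, :-1]
        boundary = grown
    return boundary, ~boundary
```

Per-region accuracy can then be computed by masking the prediction and ground truth with each returned mask.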
| Model | Boundary (acc.) | Internal (acc.) | Overall (IoU) |
| GCN + BR | 73.4 | 95.1 | 74.7 |
In the above subsection, our segmentation models are finetuned from the ResNet-152 network. Since large kernels play a critical role in segmentation tasks, it is natural to apply the idea of GCN to the pretrained model as well. Thus we propose a new ResNet-GCN structure, as shown in Figure 5. We remove the first two layers of the original bottleneck structure used by ResNet and replace them with a GCN module. To keep consistent with the original, we also apply Batch Normalization and ReLU after each convolution layer.
We compare our ResNet-GCN structure with the original ResNet model. For a fair comparison, the kernel sizes of ResNet-GCN are carefully selected so that both networks have similar computation cost and numbers of parameters. More details are provided in the appendix. We first pretrain ResNet-GCN on ImageNet 2015 and fine-tune it on the PASCAL VOC 2012 segmentation dataset. Results are shown in Table 6. Note that we use ResNet-50 models (with and without GCN) for this comparison, because training a large ResNet-152 is very costly. The results show that our GCN-based ResNet is slightly poorer than the original ResNet as an ImageNet classification model. However, after finetuning on the segmentation dataset, the ResNet-GCN model outperforms the original ResNet significantly, by 5.5%. With the application of GCN and boundary refinement, the gain of the GCN-based pretrained model becomes minor but still persists. We can safely conclude that GCN mainly helps to improve segmentation performance, whether in the pretrained model or in segmentation-specific structures.
| Model | ResNet-50 | ResNet-50-GCN |
| ImageNet cls err (%) | 7.7 | 7.9 |
| Seg. Score (Baseline) | 65.7 | 71.2 |
| Seg. Score (GCN + BR) | 72.3 | 72.5 |
In this section we discuss our practice on the PASCAL VOC 2012 dataset. Following [6, 37, 24, 7], we employ the Microsoft COCO dataset to pre-train our model. COCO has 80 classes, and here we only retain the images containing the 20 classes shared with PASCAL VOC 2012. The training phase is split into three stages: (1) In Stage-1, we mix all the images from COCO, SBD and standard PASCAL VOC 2012, resulting in 109,892 training images. (2) In Stage-2, we use the SBD and standard PASCAL VOC 2012 images, the same as in Section 4.1. (3) In Stage-3, we only use the standard PASCAL VOC 2012 dataset. The input image is padded to 640×640 in Stage-1 and to 512×512 in Stage-2 and Stage-3. The evaluation on the validation set is shown in Table 7.
| Phase | Baseline | GCN | GCN + BR |
Our GCN + BR model clearly prevails; meanwhile, the multi-scale and denseCRF post-processing also bring benefits. Some visual comparisons are given in Figure 6. We also submit our best model to the on-line evaluation server, obtaining 82.2% on the PASCAL VOC 2012 test set, as shown in Table 8. Our work outperforms all previous state-of-the-art results.
| CentraleSupelec Deep G-CRF | 80.2 |
Cityscapes is a dataset collected for semantic segmentation on urban street scenes. It contains 24,998 images from 50 cities under different conditions, annotated with 30 classes and no background class. Only 19 of the 30 classes are evaluated on the leaderboard. The images are split into two sets according to their labeling quality: 5,000 are finely annotated, while the other 19,998 are coarsely annotated. The 5,000 finely annotated images are further divided into 2,975 training, 500 validation and 1,525 testing images.
The images in Cityscapes have a fixed size of 1024×2048, which is too large for our network architecture, so we randomly crop the images to 800×800 during the training phase. We also increase the k of GCN from 15 to 25, as the final feature map is 25×25. The training phase is split into two stages: (1) In Stage-1, we mix the coarsely annotated images and the training set, resulting in 22,973 images. (2) In Stage-2, we only finetune the network on the training set. During the evaluation phase, we split each image into four crops and fuse their score maps. The results are given in Table 9.
| Phase | GCN + BR |
| FCN 8s | 65.3 |
| Scale invariant CNN + CRF | 66.3 |
We submit our best model to the on-line evaluation server, obtaining 76.9% on the Cityscapes test set, as shown in Table 10. Once again, we outperform all previous publications and reach a new state-of-the-art.
From our analysis of classification and segmentation, we find that large kernels are crucial to relieving the contradiction between classification and localization. Following the principle of large-size kernels, we propose the Global Convolutional Network. Our ablation experiments show that the proposed structure achieves a good trade-off between valid receptive field and number of parameters while attaining strong performance. To further refine the object boundaries, we present a novel Boundary Refinement block. Qualitatively, the Global Convolutional Network mainly improves the internal regions, while Boundary Refinement increases performance near the boundaries. Our best model achieves state-of-the-art results on two public benchmarks: PASCAL VOC 2012 (82.2%) and Cityscapes (76.9%).
Higher order conditional random fields in deep neural networks. In European Conference on Computer Vision, pages 524–540. Springer, 2016.
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015.
ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1529–1537, 2015.
| conv1 | 7×7, 64, stride 2 |
| res-2 | 3×3 max pool, stride 2 |
| ImageNet Classifier | global average pool, 1000-d fc, softmax |