Feature Pyramid Encoding Network for Real-time Semantic Segmentation

09/18/2019
by   Mengyu Liu, et al.

Although current deep learning methods have achieved impressive results for semantic segmentation, they incur high computational costs and have a huge number of parameters. For real-time applications, inference speed and memory usage are two important factors. To address this challenge, we propose a lightweight feature pyramid encoding network (FPENet) that makes a good trade-off between accuracy and speed. Specifically, we use a feature pyramid encoding block to encode multi-scale contextual features with depthwise dilated convolutions in all stages of the encoder. A mutual embedding upsample module is introduced in the decoder to aggregate high-level semantic features and low-level spatial details efficiently. The proposed network outperforms existing real-time methods with fewer parameters and higher inference speed on the Cityscapes and CamVid benchmark datasets. In particular, FPENet achieves 68.0% mean IoU on the Cityscapes test set with only 0.4M parameters, running at 102 FPS on an NVIDIA TITAN V GPU.


1 Introduction

Semantic segmentation has become one of the most active research areas with the recent success of deep convolutional neural networks (CNNs). It aims to assign a particular class to each pixel of an image, and can be applied to many applications from self-driving vehicles to medical image diagnostics. Most state-of-the-art semantic segmentation models are based on the fully convolutional network (FCN) [Long et al.(2015)Long, Shelhamer, and Darrell] to provide end-to-end dense classification of images, and some employ conditional random fields (CRFs) [Krähenbühl and Koltun(2011)] as a post-processing step to refine the boundaries of the segmentation results. Most high-performing methods have a large number of parameters due to their deep and wide architectures. For example, PSPNet [Zhao et al.(2017)Zhao, Shi, Qi, Wang, and Jia] has 65.7 million parameters and DeepLabV3+ [Chen et al.(2018b)Chen, Zhu, Papandreou, Schroff, and Adam] contains 54.6 million. Moreover, these methods require huge computational resources and take a long time to process an image even on modern GPUs, whereas real-world applications of semantic segmentation usually require real-time inference and a low memory footprint.

To address the above problem, several real-time semantic segmentation methods [Zhao et al.(2018)Zhao, Qi, Shen, Shi, and Jia, Paszke et al.(2016)Paszke, Chaurasia, Kim, and Culurciello, Poudel et al.(2018)Poudel, Bonde, Liwicki, and Zach] have been proposed to make a trade-off between accuracy and speed. Some methods take downsampled input images to reduce computational complexity and fuse features at different levels [Zhao et al.(2018)Zhao, Qi, Shen, Shi, and Jia, Poudel et al.(2018)Poudel, Bonde, Liwicki, and Zach], while others prune redundant channels to reduce the number of parameters [Paszke et al.(2016)Paszke, Chaurasia, Kim, and Culurciello]. These methods achieve faster inference at the cost of lower accuracy on benchmarks [Cordts et al.(2016)Cordts, Omran, Ramos, Rehfeld, Enzweiler, Benenson, Franke, Roth, and Schiele, Everingham et al.(2010)Everingham, Van Gool, Williams, Winn, and Zisserman]. Features extracted from downsampled images lack spatial details, and pruned shallow networks are weak at encoding contextual information with their small receptive fields.

Most semantic segmentation models employ the U-shape architecture [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox], composed of a deep encoder that extracts features and a decoder that fuses the extracted features at different levels for the final pixel-level classification. Most real-time segmentation models contain light decoders, consisting of a few convolutional layers and bilinear upsampling to recover the resolution [Wu et al.(2018)Wu, Tang, Zhang, and Zhang, Paszke et al.(2016)Paszke, Chaurasia, Kim, and Culurciello]. These simple decoders reduce the number of parameters and increase speed, but fine information is lost, leading to coarse segmentation, especially at boundaries. Some high-performing methods employ complicated decoders to fuse high-level features with low-level features [Peng et al.(2017)Peng, Zhang, Yu, Luo, and Sun, Jégou et al.(2017)Jégou, Drozdzal, Vazquez, Romero, and Bengio, Zhang et al.(2018)Zhang, Zhang, Peng, Xue, and Sun], so spatial information is preserved and fine segmentation is produced. However, these methods have increased computational complexity, leading to low efficiency.

Based on these observations, we propose a feature pyramid encoding network (FPENet) for real-time semantic segmentation. It is a lightweight U-shape model consisting of an encoder and a decoder. In the encoder, the feature pyramid encoding (FPE) block combines a pyramid of dilated convolutions with the depth-separable inverted bottleneck block [Sandler et al.(2018)Sandler, Howard, Zhu, Zhmoginov, and Chen]. Groups of depthwise dilated convolutions with different rates are employed in the FPE block to act as a spatial pyramid while reducing computational complexity. Encoding multi-scale features with different receptive field sizes has been proven helpful for semantic segmentation [Zhao et al.(2017)Zhao, Shi, Qi, Wang, and Jia, Chen et al.(2018b)Chen, Zhu, Papandreou, Schroff, and Adam, Yang et al.(2018)Yang, Yu, Zhang, Li, and Yang]. Instead of placing the spatial pyramid module at the end of the network, we employ it in each block to model spatial dependency and learn representations from feature maps at different levels. Depth-separable convolutions [Howard et al.(2017)Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, and Adam] are combined with dilated convolutions in the FPE block to reduce the number of parameters and inference time. For the decoder, in order to aggregate features of different levels efficiently, we propose a mutual embedding upsample (MEU) module, which uses global contextual concepts from high-level features to guide low-level features, and simultaneously embeds local spatial information from low-level features into high-level features.

In summary, the main contributions are as follows.

(i) A feature pyramid encoding block is proposed to encode multi-scale features and reduce computational complexity with groups of pyramid depthwise dilated convolutions.

(ii) A mutual embedding upsample module is introduced to aggregate the high-level and low-level features.

(iii) Significant improvements are obtained on the Cityscapes [Cordts et al.(2016)Cordts, Omran, Ramos, Rehfeld, Enzweiler, Benenson, Franke, Roth, and Schiele] and CamVid [Brostow et al.(2008)Brostow, Shotton, Fauqueur, and Cipolla] benchmarks, with a similar number of parameters but much faster inference speed compared to existing segmentation methods.

2 Related Work

First we review recent developments in real-time semantic segmentation. We then discuss studies on encoding multi-level contextual features with large receptive fields. Finally, we summarize recent research on feature aggregation.

Real-time segmentation algorithms: Real-time segmentation algorithms are required to make a trade-off between accuracy and speed, and these models are expected to be lightweight. In ICNet [Zhao et al.(2018)Zhao, Qi, Shen, Shi, and Jia] and ContextNet [Poudel et al.(2018)Poudel, Bonde, Liwicki, and Zach], multi-scale images were employed as inputs of cascaded networks to extract features. Downsampled images were applied to deep branches while large images were applied to shallow branches in these two models to reduce computational complexity. ENet [Paszke et al.(2016)Paszke, Chaurasia, Kim, and Culurciello] discards the last stage of the network and reduces the number of downsampling times to shrink the model. Mehta et al. proposed the ESPNet [Mehta et al.(2018)Mehta, Rastegari, Caspi, Shapiro, and Hajishirzi], where efficient pyramid modules were utilized to extract multi-scale features. BiSeNet [Yu et al.(2018)Yu, Wang, Peng, Gao, Yu, and Sang] extracts high-level semantic features and low-level spatial information independently with two paths. CGNet [Wu et al.(2018)Wu, Tang, Zhang, and Zhang] learns the joint representations of local features and their surrounding context, and utilizes global context attention to refine the joint features.

Multi-level contextual features: Encoding contextual features at multiple levels helps achieve good results in semantic segmentation due to the multiple scales of objects and their spatial dependency. Zhao et al. showed that global contextual features were beneficial for semantic segmentation, and proposed the PSPNet [Zhao et al.(2017)Zhao, Shi, Qi, Wang, and Jia], which applied a multi-scale spatial pooling module at the end of the model to exploit multi-level contextual features by pooling operations. In [Chen et al.(2018a)Chen, Papandreou, Kokkinos, Murphy, and Yuille], an atrous spatial pyramid pooling (ASPP) module was proposed to model semantic contextual information. ASPP contains several parallel atrous (dilated) convolutions of different rates, so multi-level contextual features are encoded simultaneously. Yang et al. improved the ASPP module with the DenseASPP block [Yang et al.(2018)Yang, Yu, Zhang, Li, and Yang], where the dilated convolutions were connected in a dense way to generate densely sampled features. In the pyramid attention network (PAN) [Li et al.(2018)Li, Xiong, An, and Wang], spatial pyramid pooling was combined with attention to generate precise pixel-level attention for high-level contextual features. In Res2Net [Gao et al.(2019)Gao, Cheng, Zhao, Zhang, Yang, and Torr], a group of filters in the residual block was replaced with smaller groups of filters to extract contextual information at multiple scales simultaneously.

Feature aggregation: Because of the repeated downsampling layers in CNNs, directly upsampling the final score map to the original resolution leads to coarse results and a loss of fine details. FCN adopts skip connections which combine the coarse and fine predictions to reconstruct dense feature maps. Ronneberger et al. proposed a U-shape network [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox], composed of an encoder and a symmetric decoder, with long skip connections linking the two parts. Peng et al. utilized boundary refinement modules in the decoder to enhance feature aggregation ability [Peng et al.(2017)Peng, Zhang, Yu, Luo, and Sun]. Li et al. proposed a global attention upsample module in the decoder to extract the global context of high-level features as guidance to weight low-level feature information [Li et al.(2018)Li, Xiong, An, and Wang]. In [Zhang et al.(2018)Zhang, Zhang, Peng, Xue, and Sun], the effectiveness of feature fusion at different levels was explored, and deeply supervised training and semantic supervision were applied to low-level features to introduce more semantic concepts.

3 Methods

We present here the feature pyramid encoding (FPE) block and the mutual embedding upsample (MEU) module in detail, and then describe the complete network architecture.

(a) Depth-separable inverted bottleneck block
(b) FPE block
Figure 1: Structures of (a) depth-separable inverted bottleneck block and (b) FPE block. The expansion ratio is 4, and dilation rates in FPE block are 1, 2, 4, 8, respectively. DConv: depthwise convolution. LConv: linear convolution. DDConv: depthwise dilated convolution. c: the number of input channels.

3.1 FPE Block

Many approaches [Chen et al.(2018a)Chen, Papandreou, Kokkinos, Murphy, and Yuille, Zhao et al.(2017)Zhao, Shi, Qi, Wang, and Jia, Li et al.(2018)Li, Xiong, An, and Wang] encode multi-scale features with an ASPP or pyramid pooling module at the end of the model to increase the receptive field, while others [Mehta et al.(2018)Mehta, Rastegari, Caspi, Shapiro, and Hajishirzi, Mehta et al.(2019)Mehta, Rastegari, Shapiro, and Hajishirzi, Wu et al.(2018)Wu, Tang, Zhang, and Zhang] adopt parallel dilated convolutions with different rates in each stage of the network to combine local information with surrounding context. Encoding multi-scale features simultaneously yields better semantic segmentation performance. We combine dilated convolutions with the inverted bottleneck structure to perform pyramid encoding in each block of the network.

The FPE block is based on the depth-separable inverted bottleneck block [Sandler et al.(2018)Sandler, Howard, Zhu, Zhmoginov, and Chen] and is composed of an expansion convolutional layer, groups of depthwise convolutions and a final pointwise convolution; a residual connection is employed where the number of input channels equals the number of output channels. The number of channels is expanded by the expansion ratio by the first convolution and squeezed back by the final convolution. Depthwise convolution splits the input into c groups (c is the number of input channels), then an independent single-channel convolutional filter is applied to each channel. After this, a pointwise convolution is used to fuse these outputs linearly. The combination of depthwise convolution and pointwise convolution is extremely efficient, reducing the computational cost by around 9 times compared to a standard convolution [Howard et al.(2017)Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, and Adam].
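As a sanity check on that figure, the multiply–accumulate counts of a standard convolution and of the depthwise-plus-pointwise factorization can be compared directly. This is a minimal sketch; the layer sizes in the example are illustrative, not taken from the paper:

```python
def conv_macs(h, w, c_in, c_out, k=3):
    """Multiply-accumulates of a standard k x k convolution (stride 1, 'same' padding)."""
    return h * w * c_in * c_out * k * k

def separable_macs(h, w, c_in, c_out, k=3):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# example: a 64-channel 3x3 layer on a 128x256 feature map
std = conv_macs(128, 256, 64, 64)
sep = separable_macs(128, 256, 64, 64)
print(std / sep)  # roughly 8x cheaper here; the ratio approaches k^2 = 9 as c_out grows
```

The ratio is (c_out · k²)/(k² + c_out), which explains why the saving is "around 9 times" for 3×3 kernels with many output channels.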

Figure 1 shows the differences between the depth-separable inverted bottleneck block and the proposed FPE block. For an input feature map of size W×H×c, where W and H are the spatial width and height of the feature map and c is the number of input channels, the FPE block first expands the number of channels from c to 4c using a 1×1 convolution. Similar to the Res2Net module [Gao et al.(2019)Gao, Cheng, Zhao, Zhang, Yang, and Torr], the output feature map is split into 4 subsets of c channels, denoted by x_i, i = 1, 2, 3, 4. Each subset x_i is then processed by a group of depthwise dilated filters K_i. The output of K_i is added to the following subset x_{i+1}, which is then processed by K_{i+1}. The outputs of these parallel branches are concatenated and fused by the final 1×1 linear convolution to reduce the channels back to c.

The pyramid encoding mechanism is performed by these four parallel depthwise dilated convolutions, where the dilation rate of K_i is 2^(i-1), i.e. 1, 2, 4 and 8. Dilated convolutions enlarge the receptive field by inserting zeros between the weights of convolutional kernels without increasing the number of parameters. For a normal bottleneck block the receptive field is only 3×3, while the receptive field of the FPE block reaches up to 31×31. Each branch processes the features extracted by all the previous branches to enhance information flow, and the number of pixels participating in the computation increases with the dilation rate. This structure can be considered as four spatial pyramid encoding modules in which the dilation rate increases branch by branch, so contextual features are encoded at four scales. The final output of the FPE block is a feature map aggregated from multi-scale features, carrying both local and surrounding contextual information.
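The cascaded split-transform-fuse structure described above can be sketched in PyTorch as follows. This is a minimal reading of the text: normalisation and activation layers, which the paper does not detail here, are deliberately omitted and would be assumptions:

```python
import torch
import torch.nn as nn

class FPEBlock(nn.Module):
    """Sketch of the FPE block: 1x1 expansion (ratio 4), four cascaded
    depthwise dilated 3x3 branches (rates 1, 2, 4, 8), 1x1 linear fusion,
    and a residual connection (input and output channels are equal)."""
    def __init__(self, c, rates=(1, 2, 4, 8)):
        super().__init__()
        self.expand = nn.Conv2d(c, 4 * c, kernel_size=1, bias=False)
        # one group of 3x3 depthwise dilated filters K_i per branch
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, kernel_size=3, padding=r, dilation=r,
                      groups=c, bias=False)
            for r in rates
        )
        self.fuse = nn.Conv2d(4 * c, c, kernel_size=1, bias=False)  # linear conv

    def forward(self, x):
        subsets = torch.chunk(self.expand(x), 4, dim=1)  # 4 subsets of c channels
        outs, prev = [], 0
        for subset, conv in zip(subsets, self.branches):
            prev = conv(subset + prev)  # add previous branch output, then K_i
            outs.append(prev)
        return x + self.fuse(torch.cat(outs, dim=1))  # residual connection
```

A forward pass preserves the input shape, so the block can be stacked freely within a stage.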

(a) MEU
(b) SA (c) CA
Figure 2: (a) Structure of MEU module. SA: spatial attention block. CA: channel attention block. (b) Spatial attention block. (c) Channel attention block.
Figure 3: Architecture of FPENet.

3.2 MEU Module

In U-shape models, the decoder is designed to aggregate features extracted at different levels and recover the resolution. Many methods [Chen et al.(2018b)Chen, Zhu, Papandreou, Schroff, and Adam, Zhao et al.(2017)Zhao, Shi, Qi, Wang, and Jia, Zhang et al.(2018)Zhang, Zhang, Peng, Xue, and Sun] use bilinear upsampling or several simple convolutions as a naive decoder. These naive decoders only consider high-level semantic concepts and ignore low-level spatial details, leading to coarse segmentation. Other approaches [Peng et al.(2017)Peng, Zhang, Yu, Luo, and Sun, Jégou et al.(2017)Jégou, Drozdzal, Vazquez, Romero, and Bengio, Lin et al.(2017)Lin, Milan, Shen, and Reid] adopt complicated modules in the decoder to aggregate features from different stages and use low-level features to refine boundaries; however, these well-designed decoders are time-consuming.

High-level features contain contextual information while low-level features are rich in spatial details, and this gap makes feature aggregation difficult. Zhang et al. showed that introducing more contextual information into low-level features, or embedding more spatial details into high-level features, can enhance feature fusion [Zhang et al.(2018)Zhang, Zhang, Peng, Xue, and Sun]. PAN [Li et al.(2018)Li, Xiong, An, and Wang] adopts a global attention upsample module to squeeze high-level context and embed it into low-level features as guidance. We consider that low-level features, which contain rich spatial information, can likewise be embedded into high-level features as guidance.

The MEU module consists of two attention blocks, as depicted in Figure 2. First, convolutions are performed on the high-level and low-level features, respectively. In the channel attention block, the high-level features go through a global average pooling operation, a convolution and a ReLU operator, and the result is multiplied by the low-level features. In the spatial attention block, the low-level features are first squeezed by an average pooling operation along the channel axis, then a convolution and a ReLU non-linearity are applied to generate a single-channel attention map, which is multiplied by the upsampled high-level features. Finally, the two weighted features are fused by element-wise addition.

The spatial attention map generated from the low-level features corresponds to the importance of each pixel; it focuses on localizing objects and refining boundaries with spatial details. The squeezed channel attention map generated from the high-level features reflects the importance of each channel; it focuses on the global context to provide content information. The MEU module extracts these two kinds of attention maps and efficiently embeds semantic concepts into low-level features and spatial details into high-level features.
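The two attention paths described above can be sketched as follows. Where the text leaves details unspecified, this sketch makes assumptions: the kernel sizes (1×1 here) and the omission of normalisation layers are ours, not the paper's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEU(nn.Module):
    """Sketch of the mutual embedding upsample (MEU) module: channel
    attention from high-level features weights the low-level features,
    spatial attention from low-level features weights the upsampled
    high-level features, and the two results are added element-wise."""
    def __init__(self, c_high, c_low, c_out):
        super().__init__()
        self.conv_high = nn.Conv2d(c_high, c_out, 1, bias=False)
        self.conv_low = nn.Conv2d(c_low, c_out, 1, bias=False)
        self.ca_conv = nn.Conv2d(c_out, c_out, 1)  # channel attention conv
        self.sa_conv = nn.Conv2d(1, 1, 1)          # spatial attention conv

    def forward(self, high, low):
        high = self.conv_high(high)
        low = self.conv_low(low)
        # channel attention: global average pool -> conv -> ReLU
        ca = F.relu(self.ca_conv(F.adaptive_avg_pool2d(high, 1)))
        # spatial attention: channel-wise average -> conv -> ReLU
        sa = F.relu(self.sa_conv(low.mean(dim=1, keepdim=True)))
        up = F.interpolate(high, size=low.shape[2:],
                           mode="bilinear", align_corners=False)
        return low * ca + up * sa  # fuse the two weighted features
```

Broadcasting handles the (1×1×C) channel map and the (H×W×1) spatial map, so no explicit expansion is needed.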

Name      Operator   Channels   Output size
stage1    Conv       16         256×512
          FPE        16         256×512
stage2    FPE        32         128×256
stage3    FPE        64         64×128
decoder2  MEU        64         128×256
decoder1  MEU        32         256×512
final     Conv       N          256×512
Table 1: Architecture details of FPENet. Input size is 512×1024. The expansion ratio of the FPE blocks is 4, and N is the number of classes.

3.3 Network Architecture

The entire network architecture is shown in Figure 3. Based on the above discussion, we designed this lightweight encoder-decoder model with FPE blocks and MEU modules. In order to preserve spatial information and reduce the number of parameters, the total downsampling rate is 8. The detailed structure of the proposed model is shown in Table 1.

We employ FPE blocks throughout the encoder except for the first layer, and the numbers of channels in the three stages are 16, 32 and 64, respectively. In stages 2 and 3 we employ P and Q FPE blocks respectively, and the stride of the depthwise dilated convolutions is set to 2 in the first block of each stage to downsample the feature maps. The expansion ratios of all FPE blocks are set to 4 to perform pyramid encoding, except for the first block, which is a normal bottleneck block. We add long skip connections in stages 2 and 3: the inputs of these two stages combine the outputs of the first and last blocks of their preceding stages. These skip connections encourage signal propagation and act as an implicit deep supervision, since earlier layers are connected to the deepest layers and receive supervision from different stages of the decoder. For the decoder, two MEU modules are used to aggregate features from each stage and recover the resolution step by step. Finally, a 1×1 convolutional layer is applied as the pixel-level classifier.

4 Experiments

4.1 Implementation Protocol

We conducted all the experiments using PyTorch [Paszke et al.(2017)Paszke, Gross, Chintala, Chanan, Yang, DeVito, Lin, Desmaison, Antiga, and Lerer] with CUDA 10.0 and cuDNN back-ends. The Adam algorithm [Kingma and Ba(2015)] with batch size 8 and weight decay 0.0001 was used to train the networks from scratch, without pre-training on any large dataset. The "poly" learning rate policy [Chen et al.(2018a)Chen, Papandreou, Kokkinos, Murphy, and Yuille] was employed:

lr = lr_init × (1 − epoch / max_epoch)^power    (1)

where epoch is the current epoch number, power is 0.9 and the initial learning rate lr_init was set to 0.0005. We employed zero-mean normalization, random horizontal flips, random rotations between -10 and 10 degrees and random scaling between 0.5 and 1.75 for data augmentation. The networks were trained for 400 epochs on Cityscapes and 300 epochs on CamVid. For training and testing on the Cityscapes dataset, we downsampled the input images by a factor of two and recovered the segmentation results to the original resolution using bilinear upsampling. For the CamVid dataset, images were trained and evaluated at the original resolution. Accuracy was measured with the mean Intersection-over-Union (mIoU) metric, and the mean cross-entropy error over all pixels was used as the loss.
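The "poly" schedule of Eq. (1) is straightforward to reproduce; a minimal sketch with the Cityscapes settings above (in practice this function could be wrapped in a PyTorch `LambdaLR` scheduler):

```python
def poly_lr(epoch, max_epochs, base_lr=5e-4, power=0.9):
    """'Poly' learning-rate policy: base_lr * (1 - epoch / max_epochs) ** power."""
    return base_lr * (1.0 - epoch / max_epochs) ** power

# Cityscapes schedule: 400 epochs, initial lr 0.0005, power 0.9
for epoch in (0, 100, 200, 399):
    print(epoch, poly_lr(epoch, 400))
```

The rate starts at the base value and decays smoothly to zero at the final epoch.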

4.2 Ablation Studies

Cityscapes is an urban street scene dataset for semantic understanding. It contains 5000 finely annotated images, divided into three sets: 2975 for training, 500 for validation and 1525 for testing. A further 20000 coarsely annotated images are provided for training. All images have a resolution of 1024×2048, and all pixels are annotated with 19 classes. In our experiments, only the finely annotated images were used for training the networks. In these ablation studies, we evaluated our networks on the Cityscapes validation set to investigate the effect of each component of FPENet.
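The mIoU metric used throughout can be computed from a confusion matrix over the 19 classes; a minimal NumPy sketch (treating label 255 as the ignore index follows common Cityscapes practice and is our assumption, not a detail from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes=19, ignore_index=255):
    """Mean Intersection-over-Union from flattened label arrays."""
    mask = target != ignore_index               # drop unlabelled pixels
    hist = np.bincount(
        num_classes * target[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)         # confusion matrix
    inter = np.diag(hist)
    union = hist.sum(axis=0) + hist.sum(axis=1) - inter
    valid = union > 0                           # skip classes absent from both
    return (inter[valid] / union[valid]).mean()
```

Per-class IoU is intersection over union of predicted and ground-truth masks; mIoU averages over the classes that actually occur.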

Ablation on pyramid encoding structure: We adopted three schemes to evaluate the effect of the pyramid encoding structure by changing the number of branches in the FPE block to 1, 2 and 4. When the number of branches is 1, the FPE block is equal to the normal bottleneck block. The expansion ratios were the same in all three schemes to keep the number of parameters equal, and P and Q were set to 3 and 7, respectively. Naive bilinear upsampling was employed as the decoder in these schemes. Results are shown in Table 2: the pyramid encoding structure gave better results, with the 2-branch and 4-branch schemes improving segmentation quality by 3.5% and 6.6%, respectively. These consistent improvements indicate that the pyramid encoding structure is beneficial for the segmentation task, as multi-scale contextual features are encoded efficiently without introducing new parameters.

Name       #Branches   mIoU (%)
FPE_P3Q7   1           55.9
FPE_P3Q7   2           59.4
FPE_P3Q7   4           62.5
Table 2: Results of the FPE encoder with different numbers of branches.

Name       Dilation rates   mIoU (%)
FPE_P3Q7   1, 2, 3, 4       61.7
FPE_P3Q7   1, 2, 4, 8       62.5
Table 3: Results of the FPE encoder with different combinations of dilation rates.

Ablation on dilation rates: We designed two kinds of FPE blocks with different combinations of dilation rates and used them to build the encoder, one with dilation rates of 1, 2, 3, 4 and the other with 1, 2, 4, 8. As shown in Table 3, the model with the larger dilation rates achieved the better result. The receptive fields of the branches of the former FPE block range from 3×3 up to 21×21, while those of the latter range from 3×3 up to 31×31. A larger receptive field can encode more surrounding features and learn better multi-scale representations.
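These receptive fields follow from the standard rule that each 3×3 convolution with dilation rate r in a cascade adds 2r to the receptive field; a quick sketch:

```python
def cascade_rf(rates, kernel=3):
    """Receptive field after a cascade of dilated kernel x kernel convolutions."""
    rf = 1
    for r in rates:
        rf += (kernel - 1) * r  # each layer adds (k - 1) * dilation
    return rf

# branch i of the FPE block sees the cascade of branches 1..i
for rates in ([1, 2, 3, 4], [1, 2, 4, 8]):
    print(rates, [cascade_rf(rates[:i]) for i in range(1, 5)])
    # -> 3, 7, 13, 21 for rates 1,2,3,4 and 3, 7, 15, 31 for rates 1,2,4,8
```

The doubling rates therefore give a strictly larger maximum receptive field for the same parameter count.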

Ablation on addition between branches: In FPE blocks, the output of each branch is added to the input of the following branch. As shown in Table 5, these addition operations between adjacent branches improved the accuracy from 62.5% to 63.0%. The improvement comes from the additions turning the independent branches into a cascaded pyramid module, so the larger dilated convolutions operate on features already extracted by the smaller ones. The number of pixels convolved by the large kernels also increases; this structure is similar to the DenseASPP module of [Yang et al.(2018)Yang, Yu, Zhang, Li, and Yang].

P   Q    #Params   FLOPs   mIoU (%)
3   5    233K      3.77G   59.5
3   7    305K      4.37G   64.1
5   7    325K      5.04G   64.3
3   9    378K      4.98G   65.5
5   9    398K      5.64G   65.6
3   11   450K      5.58G   65.8
Table 4: Results of FPENet with different depths. The numbers of parameters and FLOPs are estimated on a 512×1024 input.

Addition   Long skip   mIoU (%)
                       62.5
✓                      63.0
✓          ✓           64.1
Table 5: Results of the FPE encoder with different settings. P = 3, Q = 7.

MEU   CA   SA   mIoU (%)
w/o             65.5
w     ✓         66.5
w     ✓    ✓    67.2
Table 6: Results of the MEU module with different components. P = 3, Q = 9.

Ablation on long skip connection: Long skip connections were employed in stages 2 and 3 of FPENet to combine the outputs of the first and final blocks. As shown in Table 5, accuracy improved by 1.1%. Intuitively, long skip connections apply implicit supervision to earlier layers and increase the flow of information.

Ablation on encoder depth: We used different numbers of blocks in stages 2 and 3 to change the depth of the encoder. The numbers of parameters, FLOPs and accuracies of the different configurations are shown in Table 4. The value of Q has more impact on accuracy than P, indicating that stacking more FPE blocks in stage 3 increases the receptive field and achieves better results. However, when raising Q from 9 to 11, the improvement became minor; this may be because the receptive field in stage 3 then exceeds the size of the feature maps, so effective features cannot be extracted. Therefore, to trade off accuracy against computational complexity, we set P to 3 and Q to 9 in the final architecture.

Ablation on decoder: Since the FPE blocks extract features at different stages, MEU modules were used to aggregate these features for dense pixel-level prediction. We first evaluated the MEU module with only the channel attention block, and then used the channel and spatial attention blocks together. As shown in Table 6, both attention blocks improved the accuracy, indicating that embedding semantic concepts into low-level features and spatial details into high-level features with the MEU module leads to better results.

4.3 Cityscapes

Based on the ablation studies, we combined the FPE blocks and MEU modules to build the complete network and evaluated it on the Cityscapes dataset. First, we estimated the inference speed at different resolutions for comparison with other methods. All experiments were conducted on an NVIDIA TITAN V GPU using the PyTorch framework with CUDA 10.0 and cuDNN 7.4, and each network was randomly initialized and evaluated 100 times. The results are shown in Table 7. Next, we trained FPENet with only the finely annotated images of Cityscapes; accuracies on the test set are also shown in Table 7. For a fair comparison, we did not employ multi-scale or multi-crop testing.
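Speed measurements of this kind are typically averaged over repeated forward passes with gradients disabled. The sketch below shows such a protocol; the warm-up count and the explicit `torch.cuda.synchronize` calls are our assumptions about standard practice, not details reported in the paper:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, input_size=(1, 3, 512, 1024), runs=100, warmup=10):
    """Average frames per second over `runs` forward passes on random input."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):          # warm-up to exclude one-off setup costs
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # CUDA kernels launch asynchronously
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return runs / (time.perf_counter() - start)

# e.g. fps = measure_fps(some_segmentation_model)
```

Without the synchronization points, timings on a GPU would only measure kernel launch overhead rather than actual inference time.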

As shown in Table 7, the number of parameters of FPENet is close to that of ESPNet, but its accuracy is 7.7% higher at the same input size. FPENet is 14 and 19 times smaller than BiSeNet1 and ICNet, while its mIoU is only 0.4% and 1.5% lower, respectively. Moreover, FPENet achieves 102 FPS at a 512×1024 input resolution, significantly outperforming most existing real-time methods. At the smallest input size, the accuracy is still better than that of several methods, with lower FLOPs. To improve the accuracy, we also trained and tested at a higher resolution. Some segmentation results of FPENet at different input resolutions are presented in Fig. 4.

Figure 4: Visualization results on the Cityscapes dataset: (a) image, (b) ground truth, (c)–(e) FPENet predictions at three different input resolutions.
Method #Params FLOPs FPS mIoU (%)
ENet [Paszke et al.(2016)Paszke, Chaurasia, Kim, and Culurciello] 0.4M 4.4G 61 58.3
ESPNet [Mehta et al.(2018)Mehta, Rastegari, Caspi, Shapiro, and Hajishirzi] 0.4M 4.7G 132 60.3
ESPNetv2 [Mehta et al.(2019)Mehta, Rastegari, Shapiro, and Hajishirzi] 0.7M 3.5G 84 62.1
CGNet [Wu et al.(2018)Wu, Tang, Zhang, and Zhang] 0.5M 28.0G 14 64.8
ContextNet [Poudel et al.(2018)Poudel, Bonde, Liwicki, and Zach] 0.9M 48.3G 24 66.1
BiSeNet1 [Yu et al.(2018)Yu, Wang, Peng, Gao, Yu, and Sang] 5.8M 14.8G 79 68.4
ICNet [Zhao et al.(2018)Zhao, Qi, Shen, Shi, and Jia] 7.8M 29.8G 59 69.5
FPENet 0.4M 3.2G 129 62.7
FPENet 0.4M 5.7G 102 68.0
FPENet 0.4M 12.8G 55 70.1
Table 7: Speed and accuracy comparison of FPENet and other real-time methods on the Cityscapes test set.

4.4 CamVid

The CamVid road scene dataset has 701 fully labelled images for semantic segmentation: 367 for training, 101 for validation and 233 for testing. Each image is of 960×720 pixels, labelled with 11 semantic classes. We used the training and validation sets to train our model and tested on the test set. Global accuracy and mIoU results are shown in Table 8. Our method outperforms other deep models with fewer parameters.

Method #Params Global avg. (%) mIoU (%)
ENet [Paszke et al.(2016)Paszke, Chaurasia, Kim, and Culurciello] 0.4M — 51.3
FCN8 [Long et al.(2015)Long, Shelhamer, and Darrell] 134.5M 83.1 52.0
Bayesian SegNet [Kendall et al.(2015)Kendall, Badrinarayanan, and Cipolla] 29.5M 86.9 63.1
BiSeNet1 [Yu et al.(2018)Yu, Wang, Peng, Gao, Yu, and Sang] 5.8M — 65.6
FPENet 0.4M 89.6 65.4
Table 8: Results on the CamVid test set. "—" indicates that the method does not report the corresponding result.

5 Conclusions

This paper presents a lightweight architecture, the feature pyramid encoding network (FPENet), for semantic segmentation. A feature pyramid encoding (FPE) block is proposed and adopted in every stage of FPENet to encode multi-scale features using a spatial pyramid of depthwise dilated convolutions. Mutual embedding upsample (MEU) modules are employed in the decoder to aggregate features from different stages. The ablation experiments show that the FPE blocks significantly improve accuracy thanks to their large receptive field and enhanced information flow, and that the MEU modules aggregate deep contextual features and shallow spatial features efficiently. Experimental results on the Cityscapes and CamVid datasets demonstrate the superiority of the proposed FPENet over other real-time methods, with much faster inference speed and fewer parameters.

References

  • [Brostow et al.(2008)Brostow, Shotton, Fauqueur, and Cipolla] Gabriel J Brostow, Jamie Shotton, Julien Fauqueur, and Roberto Cipolla. Segmentation and recognition using structure from motion point clouds. In Proceedings of the European Conference on Computer Vision, pages 44–57, 2008.
  • [Chen et al.(2018a)Chen, Papandreou, Kokkinos, Murphy, and Yuille] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018a.
  • [Chen et al.(2018b)Chen, Zhu, Papandreou, Schroff, and Adam] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision, pages 801–818, 2018b.
  • [Cordts et al.(2016)Cordts, Omran, Ramos, Rehfeld, Enzweiler, Benenson, Franke, Roth, and Schiele] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213–3223, 2016.
  • [Everingham et al.(2010)Everingham, Van Gool, Williams, Winn, and Zisserman] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303–338, 2010.
  • [Gao et al.(2019)Gao, Cheng, Zhao, Zhang, Yang, and Torr] Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, and Philip Torr. Res2net: A new multi-scale backbone architecture. arXiv preprint arXiv:1904.01169, 2019.
  • [Howard et al.(2017)Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, and Adam] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
  • [Jégou et al.(2017)Jégou, Drozdzal, Vazquez, Romero, and Bengio] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 11–19, 2017.
  • [Kendall et al.(2015)Kendall, Badrinarayanan, and Cipolla] Alex Kendall, Vijay Badrinarayanan, and Roberto Cipolla. Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
  • [Kingma and Ba(2015)] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  • [Krähenbühl and Koltun(2011)] Philipp Krähenbühl and Vladlen Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in Neural Information Processing Systems, pages 109–117, 2011.
  • [Li et al.(2018)Li, Xiong, An, and Wang] Hanchao Li, Pengfei Xiong, Jie An, and Lingxue Wang. Pyramid attention network for semantic segmentation. arXiv preprint arXiv:1805.10180, 2018.
  • [Lin et al.(2017)Lin, Milan, Shen, and Reid] Guosheng Lin, Anton Milan, Chunhua Shen, and Ian Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1925–1934, 2017.
  • [Long et al.(2015)Long, Shelhamer, and Darrell] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
  • [Mehta et al.(2018)Mehta, Rastegari, Caspi, Shapiro, and Hajishirzi] Sachin Mehta, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi. Espnet: Efficient spatial pyramid of dilated convolutions for semantic segmentation. In Proceedings of the European Conference on Computer Vision, pages 552–568, 2018.
  • [Mehta et al.(2019)Mehta, Rastegari, Shapiro, and Hajishirzi] Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi. Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9190–9200, 2019.
  • [Paszke et al.(2016)Paszke, Chaurasia, Kim, and Culurciello] Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147, 2016.
  • [Paszke et al.(2017)Paszke, Gross, Chintala, Chanan, Yang, DeVito, Lin, Desmaison, Antiga, and Lerer] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In Advances in Neural Information Processing Systems Workshops, 2017.
  • [Peng et al.(2017)Peng, Zhang, Yu, Luo, and Sun] Chao Peng, Xiangyu Zhang, Gang Yu, Guiming Luo, and Jian Sun. Large kernel matters–improve semantic segmentation by global convolutional network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4353–4361, 2017.
  • [Poudel et al.(2018)Poudel, Bonde, Liwicki, and Zach] Rudra PK Poudel, Ujwal Bonde, Stephan Liwicki, and Christopher Zach. Contextnet: Exploring context and detail for semantic segmentation in real-time. arXiv preprint arXiv:1805.04554, 2018.
  • [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention, pages 234–241, 2015.
  • [Sandler et al.(2018)Sandler, Howard, Zhu, Zhmoginov, and Chen] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
  • [Wu et al.(2018)Wu, Tang, Zhang, and Zhang] Tianyi Wu, Sheng Tang, Rui Zhang, and Yongdong Zhang. Cgnet: A light-weight context guided network for semantic segmentation. arXiv preprint arXiv:1811.08201, 2018.
  • [Yang et al.(2018)Yang, Yu, Zhang, Li, and Yang] Maoke Yang, Kun Yu, Chi Zhang, Zhiwei Li, and Kuiyuan Yang. Denseaspp for semantic segmentation in street scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3684–3692, 2018.
  • [Yu et al.(2018)Yu, Wang, Peng, Gao, Yu, and Sang] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision, pages 325–341, 2018.
  • [Zhang et al.(2018)Zhang, Zhang, Peng, Xue, and Sun] Zhenli Zhang, Xiangyu Zhang, Chao Peng, Xiangyang Xue, and Jian Sun. Exfuse: Enhancing feature fusion for semantic segmentation. In Proceedings of the European Conference on Computer Vision, pages 269–284, 2018.
  • [Zhao et al.(2017)Zhao, Shi, Qi, Wang, and Jia] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2881–2890, 2017.
  • [Zhao et al.(2018)Zhao, Qi, Shen, Shi, and Jia] Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, and Jiaya Jia. Icnet for real-time semantic segmentation on high-resolution images. In Proceedings of the European Conference on Computer Vision, pages 405–420, 2018.