CGNet: A Light-weight Context Guided Network for Semantic Segmentation

11/20/2018 · Tianyi Wu et al. · Institute of Computing Technology, Chinese Academy of Sciences

The demand for applying semantic segmentation models on mobile devices has been increasing rapidly. Current state-of-the-art networks have an enormous number of parameters and are hence unsuitable for mobile devices, while other small-memory-footprint models ignore the inherent characteristics of semantic segmentation. To tackle this problem, we propose the Context Guided Network (CGNet), a light-weight network for semantic segmentation on mobile devices. We first propose the Context Guided (CG) block, which learns the joint feature of both local feature and surrounding context, and further improves the joint feature with the global context. Based on the CG block, we develop the Context Guided Network (CGNet), which captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy. CGNet is also elaborately designed to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on the Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, CGNet achieves 64.8% mean IoU on Cityscapes with less than 0.5 M parameters, and runs at 50 fps on one NVIDIA Tesla K80 card for 2048 × 1024 high-resolution images. The source code for the complete system is publicly available.

1 Introduction

Recent interest in autonomous driving and robotic systems has created a strong demand for deploying semantic segmentation models on mobile devices. Designing a model with both a small memory footprint and high accuracy is significant and challenging. Fig. 1 shows the accuracy and the number of parameters of different frameworks on the Cityscapes [10] dataset. High-accuracy methods, marked as blue points in Fig. 1, are transferred from deep image classification networks and have a huge number of parameters, e.g. DFN [32] with 44.8 M, DeepLabv3+ [9] with 54.6 M and DenseASPP [30] with 28.6 M. Therefore, most of these high-accuracy methods are unfit for deployment on mobile devices. There are some models with a small memory footprint, marked as red points in Fig. 1. Unfortunately, these small-footprint methods achieve low segmentation accuracy, because they only follow the design principles of image classification and ignore the inherent properties of semantic segmentation. To address this issue, we propose a light-weight network specially tailored for semantic segmentation, named the Context Guided Network (CGNet).

Figure 1: Accuracy vs. the number of parameters on Cityscapes [10]. The methods involved are Dilation8 [33], DeepLabv2 [6], SQNet [28], ENet [23], PSPNet [39], RefineNet [19], FRRN [24], FCN-8s [26], SegNet [1], ESPNet [21], ERFNet [25], ICNet [38], DenseASPP [30], DeepLabv3+ [9], DFN [32], BiSeNet [31], and the proposed CGNet. Blue points: high-accuracy methods. Red points: methods with small memory footprint. Compared with the small-memory-footprint methods, the proposed CGNet sits in the upper-left region: it has fewer parameters while achieving higher accuracy.

In order to improve the accuracy, we design the novel CGNet to exploit the inherent properties of semantic segmentation. Spatial dependency and contextual information play an important role in improving accuracy, since semantic segmentation involves both pixel-level categorization and object localization. Thus, we present the Context Guided (CG) block, the basic unit of CGNet, to model spatial dependency and semantic contextual information effectively and efficiently. Firstly, the CG block learns the joint feature of both local feature and surrounding context. Thus, the CG block learns the representation of each object from both itself and its spatially related objects, which contains rich co-occurrence relationships. Secondly, the CG block employs the global context to improve the joint feature. The global context is applied to re-weight the joint feature channel-wisely, so as to emphasize useful components and suppress useless ones. Thirdly, the CG block is utilized in all stages of CGNet, from bottom to top. Thus, CGNet captures contextual information from both the semantic level (from deep layers) and the spatial level (from shallow layers), which is a better fit for semantic segmentation than existing methods. Existing segmentation frameworks can be divided into two types: (1) some methods, which we call FCN-shape models, follow the design principle of image classification and ignore the contextual information, e.g. ESPNet [21], ENet [23] and FCN [26], as shown in Fig. 2 (a); (2) other methods, which we call FCN-CM models, only capture contextual information from the semantic level by applying a context module after the encoding stage, e.g. DPC [5], DenseASPP [30], DFN [32] and PSPNet [39], as shown in Fig. 2 (b). In contrast, a structure that captures context features in all stages is more effective and efficient, as shown in Fig. 2 (c). Fourthly, current mainstream segmentation networks have five down-sampling stages, which learn overly abstract features of objects and discard much of the discriminative spatial information, causing over-smoothed segmentation boundaries. In contrast, CGNet has only three down-sampling stages, which helps preserve spatial information.

Additionally, CGNet is elaborately designed to reduce the number of parameters. Firstly, it follows the principle of “deep and thin” to save memory footprint as much as possible. CGNet contains only 51 layers, and the numbers of channels in the three stages are 32, 64 and 128, respectively. Compared with frameworks [5, 32, 39, 30] transferred from ResNet [12] and DenseNet [15], which contain hundreds of layers and thousands of channels, CGNet is a light-weight neural network. Secondly, to further reduce the number of parameters and save memory footprint, the CG block adopts channel-wise convolutions, which remove the computational cost across channels. Finally, experiments on Cityscapes [10] and CamVid [3] verify the effectiveness and efficiency of the proposed CGNet. Without any pre-processing, post-processing, or complex upsampling, our model achieves 64.8% mean IoU on the Cityscapes test set with less than 0.5 M parameters, and can process an image of 2048 × 1024 resolution at 50 fps on a single Tesla K80 card. We will release the code soon.

Figure 2: Alternative architectures for semantic segmentation. CM: context modules, CF: context features. (a) FCN-shape models follow the design principle of image classification and ignore the contextual information. (b) FCN-CM models only capture contextual information from the semantic level by performing context module after the encoding stage. (c) The proposed CGNet captures context features in all stages, from both semantic level and spatial level.

Our main contributions can be summarized as follows:

  • We analyze the inherent properties of semantic segmentation and propose the CG block, which learns the joint feature of both local feature and surrounding context, and further improves the joint feature with the global context.

  • We design CGNet, which applies CG block to effectively and efficiently capture contextual information in all stages. The backbone of CGNet is particularly tailored for increasing segmentation accuracy.

  • We elaborately design the architecture of CGNet to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks.

Figure 3: (a) It is difficult to categorize the yellow region when we only pay attention to the yellow region itself. (b) It is easier to recognize the yellow region with the help of its surrounding context (red region). (c) Intuitively, we can categorize the yellow region with a higher degree of confidence when we further consider the global contextual information (purple region). (d) The structure of the Context Guided block, which consists of a local feature extractor f_loc(*), a surrounding context extractor f_sur(*), a joint feature extractor f_joi(*), and a global context extractor f_glo(*). ⊗ denotes element-wise multiplication.
Figure 4: Structure of Local Residual Learning (LRL) and Global Residual Learning (GRL).

2 Related Work

In this section, we introduce related work on semantic segmentation, including small semantic segmentation models and contextual information models, as well as related work on attention models.

Small semantic segmentation models: Small semantic segmentation models require a good trade-off between accuracy and model parameters or memory footprint. ENet [23] proposes to discard the last stage of the model and shows that semantic segmentation is feasible on embedded devices. ICNet [38] proposes a compressed-PSPNet-based image cascade network to speed up semantic segmentation. More recently, ESPNet [22] introduces a fast and efficient convolutional network for semantic segmentation of high-resolution images under resource constraints. Most of these models follow the design principles of image classification, which leads to poor segmentation accuracy.

Contextual information models: Recent works [7, 11, 32, 34] have shown that contextual information helps models predict high-quality segmentation results. One direction is to enlarge the receptive field of filters or construct specific modules to capture contextual information. Dilation8 [33] employs multiple dilated convolutional layers after class likelihood maps to perform multi-scale context aggregation. SAC [37] proposes a scale-adaptive convolution to acquire flexible-size receptive fields. DeepLab-v3 [7] employs Atrous Spatial Pyramid Pooling [6] to capture useful contextual information at multiple scales. Following this, the work [30] introduces DenseASPP, which connects a set of atrous convolutional layers to generate multi-scale features. In addition, the work [35] proposes a Global-residual Refinement Network that exploits global contextual information to predict parsing residuals. PSPNet [39] introduces four pooling branches to exploit global information from different subregions. By contrast, some other approaches directly construct information propagation models. SPN [20] constructs a row/column linear propagation model to capture dense and global pairwise relationships in an image, and PSANet [40] proposes to learn adaptive point-wise context by employing bi-directional information propagation. Another direction is to use Conditional Random Fields (CRFs) to model long-range dependencies. CRFasRNN [41] reformulates DenseCRF with pairwise potential functions and unrolls the mean-field steps as a recurrent neural network, which forms a unified framework that can be learned end-to-end. Differently, the DeepLab frameworks [6] use DenseCRF [18] as post-processing. After that, many approaches combine CRFs and DCNNs in a unified framework, such as those combining Gaussian CRFs [4] and specific pairwise potentials [16]. More recently, CCL [11] proposes a novel context-contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. DPC [5] proposes to search for efficient multi-scale architectures using architecture search techniques. Most of these works explore contextual information in the decoder phase and ignore the surrounding context, since they take a classification network as the backbone of the segmentation model. In contrast, the proposed approach learns the joint feature of both local feature and surrounding context in the encoder phase, which is more representative for semantic segmentation than features extracted by a classification network.

Figure 5: Architecture of the proposed Context Guided Network. “M” and “N” are the numbers of CG blocks in stage 2 and stage 3, respectively.

Attention models: Recently, attention mechanisms have been widely used to increase model capability. RNNsearch [2] proposes an attention model that softly weighs the importance of input words when predicting a target word for machine translation. Following this, SA [8] proposes an attention mechanism that learns to softly weigh the features from different input scales when predicting the semantic label of a pixel. SENet [14] proposes to recalibrate channel-wise feature responses by explicitly modeling interdependencies between channels for image classification. More recently, NL [29] proposes to compute the response at a position as a weighted sum of the features at all positions for video classification. In contrast, we introduce the attention mechanism into semantic segmentation. Our proposed CG block uses the global contextual information to compute a weight vector, which is employed to refine the joint feature of both local feature and surrounding context.

3 Proposed Approach

In this work, we develop CGNet, a light-weight neural network for semantic segmentation on mobile devices. In this section, we first elaborate on the CG block, the key component of CGNet. Then we present the architecture of CGNet. Finally, we compare the CG block with similar units.

3.1 Context Guided Block

The CG block is inspired by the human visual system, which depends on contextual information to understand the scene. As shown in Fig. 3 (a), suppose the human visual system tries to recognize the yellow region, which is difficult if we only pay attention to this region itself. In Fig. 3 (b), we define the red region as the surrounding context of the yellow region. If both the yellow region and its surrounding context are obtained, it is easier to assign the category to the yellow region. Therefore, the surrounding context is helpful for semantic segmentation. For Fig. 3 (c), if the human visual system further captures the global context of the whole scene (purple region) along with the yellow region and its surrounding context (red region), it has a higher degree of confidence to categorize the yellow region. Therefore, both surrounding context and global context are helpful for improving the segmentation accuracy.

Name    | Type                 | Channel | Output size
stage 1 | 3×3 Conv (stride=2)  | 32      | 340 × 340
        | 3×3 Conv (stride=1)  | 32      | 340 × 340
        | 3×3 Conv (stride=1)  | 32      | 340 × 340
stage 2 | CG block (r=2) × M   | 64      | 170 × 170
stage 3 | CG block (r=4) × N   | 128     | 85 × 85
        | 1×1 Conv (stride=1)  | 19      | 85 × 85
Table 1: The CGNet architecture for Cityscapes. The input size is 3 × 680 × 680. “Conv” represents the Conv-BN-PReLU operator. “r” is the dilation rate of the atrous/dilated convolution in the surrounding context extractor f_sur(*). “M” and “N” are the numbers of CG blocks in stage 2 and stage 3, respectively.

Based on the above analysis, we introduce the CG block to take full advantage of local feature, surrounding context and global context. The CG block consists of a local feature extractor f_loc(*), a surrounding context extractor f_sur(*), a joint feature extractor f_joi(*), and a global context extractor f_glo(*), as shown in Fig. 3 (d). The CG block contains two main steps. In the first step, f_loc(*) and f_sur(*) are employed to learn the local feature and the corresponding surrounding context, respectively. f_loc(*) is instantiated as a 3 × 3 standard convolutional layer that learns the local feature from the 8 neighboring feature vectors, corresponding to the yellow region in Fig. 3 (a). Meanwhile, f_sur(*) is instantiated as a 3 × 3 atrous/dilated convolutional layer, since atrous/dilated convolution has a relatively large receptive field to learn the surrounding context efficiently, corresponding to the red region in Fig. 3 (b). f_joi(*) then obtains the joint feature from the outputs of f_loc(*) and f_sur(*). We simply design f_joi(*) as a concatenation layer followed by Batch Normalization (BN) and Parametric ReLU (PReLU) operators. In the second step, f_glo(*) extracts the global context to improve the joint feature. Inspired by SENet [14], the global context is treated as a weight vector and is applied to refine the joint feature channel-wisely, so as to emphasize useful components and suppress useless ones. In practice, we instantiate f_glo(*) as a global average pooling layer that aggregates the global context, corresponding to the purple region in Fig. 3 (c), followed by a multilayer perceptron to further extract the global context. Finally, we employ a scale layer to re-weight the joint feature with the extracted global context. Note that the refining operation of f_glo(*) is adaptive to the input image, since the extracted global context is generated from the input image.

Furthermore, the proposed CG block employs residual learning [12], which helps learn highly complex features and improves gradient back-propagation during training. There are two types of residual connection in the proposed CG block. One is local residual learning (LRL), which connects the input and the joint feature extractor f_joi(*). The other is global residual learning (GRL), which bridges the input and the global context extractor f_glo(*). Fig. 4 (a) and (b) show these two cases, respectively. Intuitively, GRL has a stronger capability than LRL to promote the flow of information in the network.
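For concreteness, the following is a minimal PyTorch sketch of a CG block as described above. The class and argument names (ContextGuidedBlock, dilation, reduction), the initial 1 × 1 channel-reduction layer, and the width of the MLP inside f_glo(*) are our own illustrative assumptions rather than the released implementation; the sketch also assumes an even channel count.

```python
# Minimal sketch of a Context Guided (CG) block (Section 3.1), not the
# authors' released code. f_loc / f_sur are channel-wise 3x3 convolutions
# (cf. Section 3.2), f_joi is concatenation + BN + PReLU, f_glo re-weights
# the joint feature channel-wisely, and the skip connection realizes GRL.
import torch
import torch.nn as nn


class ContextGuidedBlock(nn.Module):
    def __init__(self, channels, dilation=2, reduction=16):
        super().__init__()
        half = channels // 2
        # assumed 1x1 reduction so that concatenating the two branches
        # restores the original channel count
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.PReLU(half))
        # f_loc: 3x3 channel-wise convolution -> local feature
        self.f_loc = nn.Conv2d(half, half, 3, padding=1,
                               groups=half, bias=False)
        # f_sur: 3x3 channel-wise atrous convolution -> surrounding context
        self.f_sur = nn.Conv2d(half, half, 3, padding=dilation,
                               dilation=dilation, groups=half, bias=False)
        # f_joi: concatenation followed by BN + PReLU
        self.f_joi = nn.Sequential(nn.BatchNorm2d(channels),
                                   nn.PReLU(channels))
        # f_glo: global average pooling + small MLP -> channel-wise weights
        self.f_glo = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        y = self.reduce(x)
        joi = self.f_joi(torch.cat([self.f_loc(y), self.f_sur(y)], dim=1))
        out = joi * self.f_glo(joi)   # channel-wise re-weighting
        return x + out                # global residual learning (GRL)
```

A down-sampling variant of the block would use a strided first convolution and drop the residual connection; we omit it here for brevity.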

Method      | f_sur(*) | Mean IoU (%)
CGNet_M3N15 | No       | 54.6
CGNet_M3N15 | Single   | 55.4
CGNet_M3N15 | Full     | 59.7
Table 2: Evaluation results of the surrounding context extractor f_sur(*) on the Cityscapes validation set. Here we set M=3, N=15.

Method      | f_glo(*) | Mean IoU (%)
CGNet_M3N15 | w/o      | 58.9
CGNet_M3N15 | w/       | 59.7
Table 3: Evaluation results of the global context extractor f_glo(*) on the Cityscapes validation set. Here we set M=3, N=15.

3.2 Context Guided Network

Based on the proposed CG block, we elaborately design the structure of CGNet to reduce the number of parameters, as shown in Fig. 5. CGNet follows the major principle of “deep and thin” to save memory footprint as much as possible. Different from frameworks transferred from deep image classification networks, which contain hundreds of layers and thousands of channels, CGNet consists of only 51 convolutional layers with small channel numbers. In order to better preserve discriminative spatial information, CGNet has only three down-sampling stages and produces feature maps at 1/8 resolution, which is much different from mainstream segmentation networks with five down-sampling stages and 1/32 feature map resolution. The detailed architecture of our proposed CGNet is presented in Tab. 1. In stage 1, we stack only three standard convolutional layers to obtain feature maps at 1/2 resolution, while in stages 2 and 3 we stack M and N CG blocks respectively, down-sampling the feature maps to 1/4 and 1/8 of the input image. For stages 2 and 3, the input of their first layer is obtained by combining the first and last blocks of the previous stage, which encourages feature reuse and strengthens feature propagation. In order to improve the flow of information in CGNet, we adopt an input injection mechanism which additionally feeds the 1/4 and 1/8 down-sampled input image into stage 2 and stage 3, respectively. Finally, a 1 × 1 convolutional layer is employed to produce the segmentation prediction.

Note that CG block is employed in all units of stage 2 and 3, which means CG block is utilized almost in all the stages of CGNet. Therefore, CGNet has the capability of aggregating contextual information from bottom to top, in both semantic level from deep layers and spatial level from shallow layers. Compared with existing segmentation frameworks which ignore the contextual information or only capture contextual information from the semantic level by performing context module after the encoding stage, the structure of CGNet is elaborately tailored for semantic segmentation to improve the accuracy.
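To illustrate the macro-structure of Tab. 1 and Fig. 5, including the input injection mechanism, a sketch of the full network follows. It reuses the ContextGuidedBlock sketch above; the helper name conv_bn_prelu, the use of plain strided 3 × 3 Conv-BN-PReLU layers at the stage entries, the exact point at which the down-sampled image is concatenated, and the final bilinear upsampling are illustrative assumptions, not the released code.

```python
# Sketch of the CGNet macro-structure (Table 1 / Fig. 5) with input
# injection. Assumes the ContextGuidedBlock sketch above is in scope;
# names and stage-entry layers are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_prelu(cin, cout, stride=1):
    # "Conv" in Table 1 denotes the Conv-BN-PReLU operator
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(cout), nn.PReLU(cout))


class CGNetSketch(nn.Module):
    def __init__(self, num_classes=19, M=3, N=21):
        super().__init__()
        # stage 1: three standard 3x3 convolutions, 1/2 resolution, 32 channels
        self.stage1 = nn.Sequential(conv_bn_prelu(3, 32, stride=2),
                                    conv_bn_prelu(32, 32),
                                    conv_bn_prelu(32, 32))
        # stage 2: M CG blocks, 1/4 resolution, 64 channels, dilation r = 2
        self.down2 = conv_bn_prelu(32 + 3, 64, stride=2)
        self.stage2 = nn.Sequential(
            *[ContextGuidedBlock(64, dilation=2) for _ in range(M)])
        # stage 3: N CG blocks, 1/8 resolution, 128 channels, dilation r = 4
        self.down3 = conv_bn_prelu(64 + 64 + 3, 128, stride=2)
        self.stage3 = nn.Sequential(
            *[ContextGuidedBlock(128, dilation=4) for _ in range(N)])
        self.classifier = nn.Conv2d(128, num_classes, 1)  # 1x1 prediction layer

    def forward(self, x):
        inj2 = F.avg_pool2d(x, 2)   # input injection towards stage 2
        inj4 = F.avg_pool2d(x, 4)   # input injection towards stage 3
        s1 = self.stage1(x)                                # 1/2 resolution
        f2 = self.down2(torch.cat([s1, inj2], dim=1))      # 1/4 resolution
        s2 = self.stage2(f2)
        # stage 3 input combines the first and last features of stage 2
        f3 = self.down3(torch.cat([f2, s2, inj4], dim=1))  # 1/8 resolution
        s3 = self.stage3(f3)
        logits = self.classifier(s3)
        return F.interpolate(logits, size=x.shape[2:],
                             mode='bilinear', align_corners=False)
```

Under this sketch, CGNet_M3N21 would correspond to CGNetSketch(M=3, N=21), and a 3 × 680 × 680 input passes through feature maps of 340 × 340, 170 × 170 and 85 × 85, matching Tab. 1.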

Furthermore, in order to further reduce the number of parameters, the feature extractors f_loc(*) and f_sur(*) employ channel-wise convolutions, which remove the computational cost across channels and save much memory footprint. Previous work [13] employs a 1 × 1 convolutional layer following the channel-wise convolutions to promote the flow of information between channels. However, this design is not suitable for the proposed CG block, since the local feature and the surrounding context in the CG block need to maintain channel independence. Additional experiments also verify this observation.
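As a back-of-the-envelope illustration of this saving, the snippet below (our own arithmetic, not from the paper) compares the weight counts of a standard 3 × 3 convolution, a channel-wise 3 × 3 convolution, and a channel-wise 3 × 3 convolution followed by a 1 × 1 convolution at the 128-channel width of stage 3.

```python
# Rough weight counts (ignoring BN/PReLU) for one 3x3 layer at the
# 128-channel width of stage 3; our own illustration of why the CG block
# uses channel-wise convolutions without the cross-channel 1x1 of [13].
C, k = 128, 3

standard = C * C * k * k       # standard 3x3 convolution: 147,456 weights
channelwise = C * k * k        # channel-wise 3x3 (groups = C): 1,152 weights
separable = C * k * k + C * C  # channel-wise 3x3 + 1x1: 17,536 weights

print(standard, channelwise, separable)
```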

Method      | Input Injection | Mean IoU (%)
CGNet_M3N15 | w/o             | 59.4
CGNet_M3N15 | w/              | 59.7
Table 4: The effectiveness of the input injection mechanism. Here we set M=3, N=15.

Method      | Activation | Mean IoU (%)
CGNet_M3N15 | ReLU       | 58.1
CGNet_M3N15 | PReLU      | 59.7
Table 5: Comparison of ReLU and PReLU. Here we set M=3, N=15.

3.3 Comparison with Similar Works

The ENet unit [23] employs a single main convolutional layer to extract single-scale features, which results in a lack of local features in the deeper layers of the network and a lack of surrounding context in the shallow layers. The MobileNet unit [13] employs a depth-wise separable convolution that factorizes a standard convolution into a depth-wise convolution and a point-wise convolution; our proposed CG block can be treated as a generalization of the MobileNet unit, to which it degenerates under a specific configuration of its extractors. The ESP unit [21] employs K parallel dilated convolutional kernels with different dilation rates to learn multi-scale features. The Inception unit [27] is proposed to approximate a sparse structure and process multi-scale visual information for image classification. The CCL unit [11] leverages the informative context and spotlights the local information in contrast to the context; it learns locally discriminative features from block3, block4 and block5 of ResNet-101, and fuses features of different scales through a gated sum scheme in the decoder phase. In contrast to these units, the CG block is proposed to learn the joint feature of both local feature and surrounding context in the encoder phase.

4 Experiments

In this section, we evaluate the proposed CGNet on Cityscapes [10] and CamVid[3]. Firstly, we introduce the datasets and the implementation protocol. Then the contributions of each component are investigated in ablation experiments on Cityscapes validation set. Finally, we perform comprehensive experiments on Cityscapes and CamVid benchmarks and compare with the state-of-the-art works to verify the effectiveness of CGNet.

M | N  | Parameters (M) | Mean IoU (%)
3 | 9  | 0.34 | 56.5
3 | 12 | 0.38 | 58.1
6 | 12 | 0.39 | 57.9
3 | 15 | 0.41 | 59.7
6 | 15 | 0.41 | 58.4
3 | 18 | 0.45 | 61.1
3 | 21 | 0.49 | 63.5
Table 6: Evaluation results of CGNet with different M and N on the Cityscapes validation set. M: the number of CG blocks in stage 2; N: the number of CG blocks in stage 3.

4.1 Experimental Settings

Cityscapes Dataset

The Cityscapes dataset contains 5,000 images collected in street scenes from 50 different cities. The dataset is divided into three subsets: 2,975 images in the training set, 500 images in the validation set and 1,525 images in the testing set. High-quality pixel-level annotations of 19 semantic classes are provided. Segmentation performance is reported using the commonly used mean Intersection-over-Union (IoU).
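For reference, mean IoU can be computed from a confusion matrix as in the sketch below; this is our own utility (the function name and the ignore label of 255 are assumptions), not the official Cityscapes evaluation script.

```python
# Minimal mean-IoU computation from integer prediction and ground-truth
# label maps, ignoring unlabeled pixels. Our own sketch, not the official
# evaluation code.
import numpy as np


def mean_iou(pred, gt, num_classes=19, ignore_label=255):
    valid = gt != ignore_label
    pred = pred[valid].astype(np.int64)
    gt = gt[valid].astype(np.int64)
    # confusion matrix: rows = ground-truth class, columns = predicted class
    conf = np.bincount(gt * num_classes + pred,
                       minlength=num_classes ** 2).reshape(num_classes,
                                                           num_classes)
    tp = np.diag(conf).astype(np.float64)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = np.where(union > 0, tp / np.maximum(union, 1), np.nan)
    return float(np.nanmean(iou))   # average over the observed classes
```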

CamVid Dataset

CamVid is a road scene dataset captured from the perspective of a driving automobile. The dataset contains 367 training images, 101 validation images and 233 testing images at a resolution of 480 × 360. Performance is measured by pixel intersection-over-union (IoU) averaged across the 11 classes.

Implementation protocol

All experiments are performed on the PyTorch platform. We employ the “poly” learning rate policy to decay the base learning rate. For optimization, we use ADAM [17] with batch size 14, betas = (0.9, 0.999), and weight decay during training. For data augmentation, we employ random mirroring, mean subtraction and random scaling of the input images during training. The same number of training iterations is used for Cityscapes and CamVid. Our loss function is the sum of cross-entropy terms over each spatial position in the output score map, ignoring unlabeled pixels.
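For reference, the “poly” policy scales the base learning rate by (1 − iter / max_iter)^power at every iteration; the sketch below shows a typical way to apply it to a PyTorch optimizer. The base rate, power and iteration count in the usage comment are placeholders, since the concrete values are not given above.

```python
# "Poly" learning-rate policy: lr = base_lr * (1 - cur_iter / max_iter) ** power.
# The numbers in the commented usage are placeholders for illustration.
def poly_lr(base_lr, cur_iter, max_iter, power):
    return base_lr * (1.0 - cur_iter / max_iter) ** power


# typical usage inside the training loop:
# for group in optimizer.param_groups:
#     group["lr"] = poly_lr(base_lr=1e-3, cur_iter=it, max_iter=60000, power=0.9)
```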

4.2 Ablation Studies


Method      | Residual connection | Mean IoU (%)
CGNet_M3N21 | LRL                 | 57.2
CGNet_M3N21 | GRL                 | 63.5
Table 7: The effectiveness of local residual learning (LRL) and global residual learning (GRL). Here we set M=3, N=21.

Method      | 1×1 Conv | Mean IoU (%)
CGNet_M3N21 | w/       | 53.3
CGNet_M3N21 | w/o      | 63.5
Table 8: The effectiveness of inter-channel interaction. Here we set M=3, N=21.

Ablation Study for Surrounding Context Extractor

We adopt three schemes to evaluate the effectiveness of the surrounding context extractor f_sur(*). (1) No: CGNet_M3N15 does not employ f_sur(*), and is configured with the same number of parameters by increasing the number of channels. (2) Single: the surrounding context extractor f_sur(*) is employed only in the last block of the framework. (3) Full: the surrounding context extractor f_sur(*) is employed in all blocks of the framework. Results are shown in Tab. 2. The second and third schemes improve the accuracy by 0.8% and 5.1% respectively, which shows that surrounding context is very beneficial for segmentation accuracy and should be employed in all blocks of the framework.


Method            | FLOPS (G) | Parameters (M) | Memory (M) | Mean IoU (%)
PSPNet_MS [39]    | 453.6     | 65.6           | 2180.6     | 78.4
DenseASPP_MS [30] | 214.7     | 28.6           | 3997.5     | 80.6
SegNet [1]        | 286.0     | 29.5           | -          | 56.1
ENet [23]         | 3.8       | 0.4            | -          | 58.3
ESPNet [21]       | 4.0       | 0.4            | -          | 60.3
CGNet_M3N21       | 6.0       | 0.5            | 334        | 64.8
Table 9: Accuracy, parameter and memory analysis. FLOPS and memory are estimated for an input of 3 × 640 × 360. “-” indicates that the approach does not report the corresponding result; “MS” indicates employing multi-scale inputs with average fusion during testing.

Method              | Mean IoU (%) | Time (ms) | Speed (fps)
PSPNet              | 78.4 | 1000  | 1
DenseASPP           | 80.6 | 624   | 1.6
SegNet              | 56.1 | 506.1 | 2
ENet                | 58.3 | 61.0  | 16
ESPNet              | 60.3 | 20.4  | 49
ICNet               | 69.5 | 100.3 | 14
BiSeNet (Xception)  | 71.4 | 23.8  | 42
HRFR [36]           | 74.4 | 778.6 | 1.2
BiSeNet (ResNet-18) | 77.7 | 34.5  | 29
CGNet_M3N21         | 64.8 | 20.0  | 50
Table 10: Inference speed comparison, evaluated on 2048 × 1024 high-resolution images. Hardware platform: Tesla K80.

Ablation Study for Global Context Extractor

We use the global context to refine the joint feature learned by f_joi(*). As shown in Tab. 3, the global context extractor f_glo(*) improves the accuracy from 58.9% to 59.7%, which demonstrates that the global context extractor is desirable for the proposed approach.

Ablation Study for the Input Injection Mechanism

The input injection mechanism down-samples the input image to the resolutions of stage 2 and stage 3, and injects the results into the corresponding stages. As shown in Tab. 4, this mechanism improves the accuracy from 59.4% to 59.7%. Intuitively, this improvement comes from the input injection mechanism increasing the flow of information in the network.

Ablation Study for Activation Function

We compare ReLU and PReLU in CGNet_M3N15, as shown in Tab. 5. Using PReLU improves performance from 58.1% to 59.7%. Therefore, we choose PReLU as the activation function of the proposed model.

Ablation Study for Network Depth

We train the proposed CGNet with different numbers of blocks at each stage and show the trade-off between accuracy and the number of parameters in Tab. 6. In general, deeper networks perform better than shallow ones at the expense of increased computational cost and model size. From Tab. 6, we find that segmentation accuracy does not increase with M when N is fixed. For example, when we fix N = 12 and change M from 3 to 6, the mean IoU drops by 0.2 points. So we set M = 3 (the number of CG blocks in stage 2) for CGNet. Furthermore, we trade off accuracy against model size by setting different values of N (the number of CG blocks in stage 3). Our approach achieves its highest mean IoU of 63.5% on the Cityscapes validation set when M=3 and N=21.

Ablation Study for Residual Learning

Inspired by [12], residual learning is employed in the CG block to further improve the information flow. From Tab. 7, we find that, compared with LRL, GRL improves the accuracy from 57.2% to 63.5%. One possible reason is that GRL has a stronger ability to promote the flow of information in the network, so we choose GRL in the proposed CG block.

Ablation Study for Inter-channel Interaction

Previous work [13] employs a 1 × 1 convolution after channel-wise convolutions to improve the flow of information between channels and promote inter-channel interaction. Here, we try this 1 × 1 convolution in the CG block but find that it damages the segmentation accuracy. As shown in Tab. 8, removing the 1 × 1 convolutions improves the accuracy from 53.3% to 63.5%. In other words, this interaction mechanism severely hampers the accuracy of our models. One possible reason is that the local feature and the surrounding context feature need to maintain channel independence.

4.3 Comparison with state-of-the-arts

Memory analysis

Tab. 9 reports a comparison of FLOPS (floating point operations), memory footprint and parameters of different models. The efficiency of CGNet_M3N21 is evident compared with the current smallest semantic segmentation models. The number of parameters of CGNet_M3N21 is close to that of ENet [23], while our method is 6.5% higher in mean IoU. Furthermore, the accuracy of our approach is 4.5% higher than the very recent ESPNet [21]. With so few parameters and such a small memory footprint, CGNet is very suitable for deployment on mobile devices. Furthermore, compared with deep, state-of-the-art semantic segmentation networks, CGNet_M3N21 is 131 and 57 times smaller than PSPNet [39] and DenseASPP [30], while its category-wise accuracy is only 5.4% and 5.5% lower, respectively.

Method                      | Pretrain     | Parameters (M) | mIoU cat (%) | mIoU cla (%)
SegNet [1]                  | ImageNet     | 29.5  | 79.1 | 56.1
FCN-8s [26]                 | ImageNet     | 134.5 | 85.7 | 65.3
ICNet [38]                  | ImageNet     | 7.8   | -    | 69.5
DeepLab-v2+CRF [6]          | ImageNet     | 44.04 | 86.4 | 70.4
BiSeNet_MS (Xception) [31]  | ImageNet     | 145.0 | -    | 71.4
BiSeNet_MS (ResNet-18) [31] | ImageNet     | 27.0  | -    | 77.7
PSPNet_MS [39]              | ImageNet     | 65.7  | 90.6 | 78.4
DFN_MS [32]                 | ImageNet     | 44.8  | -    | 79.3
DenseASPP_MS [30]           | ImageNet     | 28.6  | 90.7 | 80.6
ENet [23]                   | From scratch | 0.4   | 80.4 | 58.3
ESPNet [21]                 | From scratch | 0.4   | 82.2 | 60.3
FRRN [24]                   | From scratch | 17.7  | -    | 63.0
CGNet_M3N21                 | From scratch | 0.5   | 85.7 | 64.8
Table 11: Accuracy comparison of our method against other small or high-accuracy semantic segmentation methods on the Cityscapes test set, training only with the fine set. “Pretrain” refers to models pretrained on external data such as ImageNet, “MS” indicates employing multi-scale inputs during testing, and “-” indicates that the approach does not report the corresponding result.
Method              | Parameters (M) | Mean IoU (%)
SegNet              | 29.5  | 55.6
ENet                | 0.4   | 51.3
BiSeNet (Xception)  | 145.0 | 65.6
BiSeNet (ResNet-18) | 27.0  | 68.7
CGNet_M3N21         | 0.5   | 65.6
Table 12: Accuracy comparison of our method against other semantic segmentation methods on the CamVid test set.

Speed analysis

We report the inference speed of the proposed CGNet_M3N21 on the Cityscapes test set and compare it with other state-of-the-art methods in Tab. 10. For fairness, we re-implement these methods on a Tesla K80, since some of them do not report their running time, many may have adopted very time-consuming multi-scale testing for high accuracy, and some reported their speeds on different hardware platforms. As shown in Tab. 10, the current high-accuracy models PSPNet [39] and DenseASPP [30] take more than 0.5 seconds to predict a segmentation result on a Tesla K80 GPU card during testing. In contrast, our method achieves 50 fps with negligible precision sacrifice. Furthermore, compared with previous small-memory-footprint models, e.g. SegNet, ENet, ICNet and BiSeNet, our method improves significantly in terms of inference speed. Note that the speed of our approach is higher than that of the very recent ESPNet [21], and the accuracy of our approach is 4.5% higher. In summary, our model runs at 50 fps on 2048 × 1024 high-resolution images using only one Tesla K80 GPU card.

Accuracy analysis

We report the evaluation results of the proposed CGNet_M3N21 on the Cityscapes test set and compare with other state-of-the-art methods in Tab. 11. Without any pre-processing, post-processing, or extra modules (such as ASPP [6] or PPM [39]), our CGNet_M3N21 achieves 64.8% mean IoU when training only on the fine-annotated images. Note that we do not employ any testing tricks, such as multi-scale testing or complex upsampling. We list the number of model parameters and the segmentation accuracy in Tab. 11. Compared with the methods that do not require pretraining on ImageNet, our CGNet_M3N21 achieves a relatively large accuracy gain. For example, the mean IoU of CGNet_M3N21 is about 6.5% higher than that of ENet [23] with almost no increase in model parameters. Besides, it is even quantitatively better than some methods pretrained on ImageNet that do not consider memory footprint and speed, such as SegNet [1], whose model is about 60 times larger than CGNet_M3N21. We visualize some segmentation results on the Cityscapes validation set in Fig. 6. Tab. 12 shows the accuracy of the proposed CGNet_M3N21 on the CamVid dataset. We use the training and validation sets to train our model, with 480 × 360 resolution for training and evaluation. The number of parameters of CGNet_M3N21 is close to that of the current smallest semantic segmentation model, ENet [23], while the accuracy of our method is 14.3% higher.

Figure 6: Results of CGNet on the Cityscapes validation set. From left to right: input image, prediction of CGNet_M3N21 without the surrounding context extractor f_sur(*), prediction of CGNet_M3N21 without the global context extractor f_glo(*), prediction of CGNet_M3N21, and ground truth.

5 Conclusions

In this paper, we rethink semantic segmentation from its inherent characteristics, which involve both pixel-level categorization and object localization. We propose the novel Context Guided block for learning the joint feature of both local feature and surrounding context. Based on the Context Guided block, we develop the light-weight Context Guided Network for semantic segmentation; our model allows very memory-efficient inference, which significantly enhances the practicality of semantic segmentation in real-world scenarios. Our approach achieves 64.8% mean IoU on the Cityscapes test set with less than 0.5 M parameters, and runs at 50 fps on 2048 × 1024 high-resolution images.

References