CGNet: A Light-weight Context Guided Network for Semantic Segmentation
The demand for deploying semantic segmentation models on mobile devices has been increasing rapidly. Current state-of-the-art networks have an enormous number of parameters and are hence unsuitable for mobile devices, while other small-memory-footprint models ignore the inherent characteristics of semantic segmentation. To tackle this problem, we propose a novel Context Guided Network (CGNet), a light-weight network for semantic segmentation on mobile devices. We first propose the Context Guided (CG) block, which learns the joint feature of both the local feature and the surrounding context, and further improves the joint feature with the global context. Based on the CG block, we develop the Context Guided Network (CGNet), which captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy. CGNet is also elaborately designed to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on the Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, CGNet achieves 64.8% mean IoU on Cityscapes with less than 0.5 M parameters, and runs at 50 fps on one NVIDIA Tesla K80 card for 2048×1024 high-resolution images. The source code for the complete system is publicly available.
Recent interest in autonomous driving and robotic systems has created a strong demand for deploying semantic segmentation models on mobile devices. It is significant and challenging to design a model with both a small memory footprint and high accuracy. Fig. 1 shows the accuracy and the number of parameters of different frameworks on the Cityscapes dataset. High-accuracy methods, marked as blue points in Fig. 1, are transferred from deep image classification networks and have a huge number of parameters, e.g. DFN with 44.8 M, DeepLabv3+ with 54.6 M and DenseASPP with 28.6 M. Therefore, most of these high-accuracy methods are unfit for deployment on mobile devices. There are some models with a small memory footprint, marked as red points in Fig. 1. Unfortunately, these small-footprint methods achieve low segmentation accuracy, because they only follow the design principles of image classification and ignore the inherent properties of semantic segmentation. To address this issue, we propose a light-weight network specially tailored for semantic segmentation, named the Context Guided Network (CGNet).
In order to improve the accuracy, we design the novel CGNet to exploit the inherent properties of semantic segmentation. Spatial dependency and contextual information play an important role in improving accuracy, since semantic segmentation involves both pixel-level categorization and object localization. Thus, we present the Context Guided (CG) block, the basic unit of CGNet, to model spatial dependency and semantic contextual information effectively and efficiently. Firstly, the CG block learns the joint feature of both the local feature and the surrounding context. The CG block thus learns the representation of each object from both the object itself and its spatially related objects, which contains rich co-occurrence relationships. Secondly, the CG block employs the global context to improve the joint feature. The global context is applied to channel-wisely re-weight the joint feature, so as to emphasize useful components and suppress useless ones. Thirdly, the CG block is utilized in all stages of CGNet, from bottom to top. Thus, CGNet captures contextual information at both the semantic level (from deep layers) and the spatial level (from shallow layers), which is better suited to semantic segmentation than existing methods. Existing segmentation frameworks can be divided into two types: (1) some methods, named FCN-shape models, follow the design principles of image classification and ignore contextual information, e.g. ESPNet, ENet and FCN, as shown in Fig. 2 (a); (2) other methods, named FCN-CM models, only capture contextual information at the semantic level by applying a context module after the encoding stage, e.g. DPC, DenseASPP, DFN and PSPNet, as shown in Fig. 2 (b). In contrast, the structure that captures context features in all stages is more effective and efficient, as shown in Fig. 2 (c).
Fourthly, current mainstream segmentation networks have five down-sampling stages, which learn overly abstract object features and lose much of the discriminative spatial information, causing over-smoothed segmentation boundaries. In contrast, CGNet has only three down-sampling stages, which helps preserve spatial information.
Additionally, CGNet is elaborately designed to reduce the number of parameters. Firstly, it follows the principle of “deep and thin” to save memory footprint as much as possible. CGNet contains only 51 layers, and the numbers of channels in the three stages are 32, 64 and 128, respectively. Compared with frameworks [5, 32, 39, 30] transferred from ResNet and DenseNet, which contain hundreds of layers and thousands of channels, CGNet is a light-weight neural network. Secondly, to further reduce the number of parameters and save memory footprint, the CG block adopts channel-wise convolutions, which remove the computational cost across channels. Finally, experiments on Cityscapes and CamVid verify the effectiveness and efficiency of the proposed CGNet. Without any pre-processing, post-processing, or complex upsampling, our model achieves 64.8% mean IoU on the Cityscapes test set with less than 0.5 M parameters, and can process a 2048×1024-resolution image at a speed of 50 fps on a single Tesla K80 card. We will release the code soon.
Our main contributions can be summarized as follows:
We analyze the inherent properties of semantic segmentation and propose the CG block, which learns the joint feature of both the local feature and the surrounding context, and further improves the joint feature with the global context.
We design CGNet, which applies CG block to effectively and efficiently capture contextual information in all stages. The backbone of CGNet is particularly tailored for increasing segmentation accuracy.
We elaborately design the architecture of CGNet to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks.
In this section, we introduce related work on semantic segmentation, including small semantic segmentation models and contextual information models, as well as related work on attention models.
Small semantic segmentation models: Small semantic segmentation models require a good trade-off between accuracy and model parameters or memory footprint. ENet proposes to discard the last stage of the model and shows that semantic segmentation is feasible on embedded devices. Later, ICNet proposes a compressed-PSPNet-based image cascade network to speed up semantic segmentation. More recently, ESPNet introduces a fast and efficient convolutional network for semantic segmentation of high-resolution images under resource constraints. Most of these models follow the design principles of image classification, which leads to poor segmentation accuracy.
Contextual information models: Recent works [7, 11, 32, 34] have shown that contextual information helps models predict high-quality segmentation results. One direction is to enlarge the receptive field of filters or to construct specific modules to capture contextual information. Dilation8 employs multiple dilated convolutional layers after the class likelihood maps to perform multi-scale context aggregation. SAC proposes a scale-adaptive convolution to acquire flexible-size receptive fields. DeepLab-v3 employs Atrous Spatial Pyramid Pooling to capture useful contextual information at multiple scales. Following this, the work introduces DenseASPP, which connects a set of atrous convolutional layers to generate multi-scale features. In addition, the work proposes a Global-residual Refinement Network that exploits global contextual information to predict parsing residuals. PSPNet introduces four pooling branches to exploit global information from different subregions. By contrast, some other approaches directly construct information propagation models. SPN constructs a row/column linear propagation model to capture dense, global pairwise relationships in an image, and PSANet proposes to learn adaptive point-wise context via bi-directional information propagation. Another direction is to use Conditional Random Fields (CRFs) to model long-range dependencies. CRFasRNN reformulates DenseCRF with pairwise potential functions and unrolls the mean-field steps as a recurrent neural network, which composes a uniform framework that can be learned end-to-end. Differently, the DeepLab frameworks use DenseCRF as post-processing. After that, many approaches combine CRFs and DCNNs in a uniform framework, such as combining Gaussian CRFs and specific pairwise potentials. More recently, CCL proposes a novel context-contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. DPC proposes to search for efficient multi-scale architectures using architecture search techniques. Most of these works explore contextual information in the decoder phase and ignore the surrounding context, since they take a classification network as the backbone of the segmentation model. In contrast, the proposed approach learns the joint feature of both the local feature and the surrounding context in the encoder phase, which is more representative for semantic segmentation than features extracted by a classification network.
Attention models: Recently, attention mechanisms have been widely used to increase model capability. RNNsearch proposes an attention model that softly weighs the importance of input words when predicting a target word for machine translation. Following this, SA proposes an attention mechanism that learns to softly weigh features from different input scales when predicting the semantic label of a pixel. SENet proposes to recalibrate channel-wise feature responses by explicitly modeling interdependencies between channels for image classification. More recently, NL proposes to compute the response at a position as a weighted sum of the features at all positions for video classification. In contrast, we introduce the attention mechanism into semantic segmentation. Our proposed CG block uses the global contextual information to compute a weight vector, which is employed to refine the joint feature of both the local feature and the surrounding context feature.
In this work, we develop CGNet, a light-weight neural network for semantic segmentation on mobile devices. In this section, we first elaborate on its key component, the CG block. Then we present the architecture of CGNet. Finally, we compare the CG block with similar units.
The CG block is inspired by the human visual system, which depends on contextual information to understand the scene. As shown in Fig. 3 (a), suppose the human visual system tries to recognize the yellow region, which is difficult if we only pay attention to this region itself. In Fig. 3 (b), we define the red region as the surrounding context of the yellow region. If both the yellow region and its surrounding context are obtained, it is easier to assign the category to the yellow region. Therefore, the surrounding context is helpful for semantic segmentation. For Fig. 3 (c), if the human visual system further captures the global context of the whole scene (purple region) along with the yellow region and its surrounding context (red region), it has a higher degree of confidence to categorize the yellow region. Therefore, both surrounding context and global context are helpful for improving the segmentation accuracy.
| Stage | Operation | Channels | Output size |
| --- | --- | --- | --- |
| stage 1 | 3×3 Conv (stride=2) | 32 | 340×340 |
|  | 3×3 Conv (stride=1) | 32 | 340×340 |
|  | 3×3 Conv (stride=1) | 32 | 340×340 |
| stage 2 | CG block (r=2) × M | 64 | 170×170 |
| stage 3 | CG block (r=4) × N | 128 | 85×85 |
|  | 1×1 Conv (stride=1) | 19 | 85×85 |
Based on the above analysis, we introduce the CG block to take full advantage of the local feature, the surrounding context and the global context. The CG block consists of a local feature extractor f_loc(*), a surrounding context extractor f_sur(*), a joint feature extractor f_joi(*), and a global context extractor f_glo(*), as shown in Fig. 3 (d). The CG block contains two main steps. In the first step, f_loc(*) and f_sur(*) are employed to learn the local feature and the corresponding surrounding context, respectively. f_loc(*) is instantiated as a 3×3 standard convolutional layer that learns the local feature from the 8 neighboring feature vectors, corresponding to the yellow region in Fig. 3 (a). Meanwhile, f_sur(*) is instantiated as a 3×3 atrous/dilated convolutional layer, since atrous/dilated convolution has a relatively large receptive field and learns the surrounding context efficiently, corresponding to the red region in Fig. 3 (b). f_joi(*) then obtains the joint feature from the outputs of f_loc(*) and f_sur(*); we simply design f_joi(*) as a concatenation layer. In the second step, f_glo(*) extracts the global context to improve the joint feature. Inspired by SENet, the global context is treated as a weight vector and is applied to channel-wisely refine the joint feature, so as to emphasize useful components and suppress useless ones. In practice, we instantiate f_glo(*) as a global average pooling layer that aggregates the global context, corresponding to the purple region in Fig. 3 (c), followed by a multilayer perceptron that further extracts the global context. Finally, we employ a scale layer to re-weight the joint feature with the extracted global context. Note that the refining operation of f_glo(*) is adaptive to the input image, since the extracted global context is generated from the input image.
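The description above can be sketched in PyTorch. This is a minimal, illustrative implementation (we label the four extractors f_loc, f_sur, f_joi, f_glo as in the text): the channel split between the two branches, the sigmoid-gated bottleneck MLP for the global context extractor, and the reduction ratio are our assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ContextGuidedBlock(nn.Module):
    """Sketch of a CG block: local + surrounding context, globally re-weighted."""
    def __init__(self, channels, dilation=2, reduction=16):
        super().__init__()
        half = channels // 2
        # shrink channels before the two parallel branches (assumed design)
        self.reduce = nn.Conv2d(channels, half, 1, bias=False)
        # f_loc: 3x3 channel-wise conv over the 8 neighbours (local feature)
        self.f_loc = nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False)
        # f_sur: 3x3 channel-wise dilated conv (surrounding context)
        self.f_sur = nn.Conv2d(half, half, 3, padding=dilation,
                               dilation=dilation, groups=half, bias=False)
        # f_joi: concatenation followed by BN + PReLU
        self.bn_act = nn.Sequential(nn.BatchNorm2d(channels), nn.PReLU(channels))
        # f_glo: global average pooling + MLP producing a channel-wise weight vector
        self.f_glo = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        r = self.reduce(x)
        joint = self.bn_act(torch.cat([self.f_loc(r), self.f_sur(r)], dim=1))
        out = joint * self.f_glo(joint)  # channel-wise re-weighting by global context
        return x + out                   # global residual learning (GRL)
```

The final residual addition corresponds to the global residual learning variant discussed next; replacing it with an addition onto the joint feature would give local residual learning.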
Furthermore, the proposed CG block employs residual learning, which helps to learn highly complex features and improves gradient back-propagation during training. There are two types of residual connections in the proposed CG block. One is local residual learning (LRL), which connects the input and the joint feature extractor. The other is global residual learning (GRL), which bridges the input and the global feature extractor. Fig. 4 (a) and (b) show these two cases, respectively. Intuitively, GRL has a stronger capability than LRL to promote the flow of information in the network.
Based on the proposed CG block, we elaborately design the structure of CGNet to reduce the number of parameters, as shown in Fig. 5. CGNet follows the major principle of “deep and thin” to save memory footprint as much as possible. Unlike frameworks transferred from deep image classification networks, which contain hundreds of layers and thousands of channels, CGNet consists of only 51 convolutional layers with small channel numbers. In order to better preserve discriminative spatial information, CGNet has only three down-sampling stages and produces a 1/8-resolution feature map, which differs markedly from mainstream segmentation networks with five down-sampling stages and 1/32-resolution feature maps. The detailed architecture of our proposed CGNet is presented in Tab. 1. In stage 1, we stack only three standard convolutional layers to obtain a feature map at 1/2 resolution, while in stages 2 and 3 we stack M and N CG blocks, respectively, to downsample the feature map to 1/4 and 1/8 of the input image. For stages 2 and 3, the input of the first layer is obtained by combining the first and last blocks of the previous stage, which encourages feature reuse and strengthens feature propagation. To improve the flow of information in CGNet, we adopt an input injection mechanism, which additionally feeds the 1/4 and 1/8 downsampled input image to stage 2 and stage 3, respectively. Finally, a 1×1 convolutional layer is employed to produce the segmentation prediction.
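The three-stage layout and the input injection mechanism can be sketched as below. The stage bodies are stand-ins (plain strided convolutions instead of stacks of M and N CG blocks), and the exact point at which the downsampled image is concatenated is an illustrative choice, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Illustrative three-stage layout with input injection (not the full CGNet)."""
    def __init__(self, num_classes=19):
        super().__init__()
        # stage 1: three 3x3 convs, the first with stride 2 -> 1/2 resolution
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.PReLU(32),
            nn.Conv2d(32, 32, 3, padding=1), nn.PReLU(32),
            nn.Conv2d(32, 32, 3, padding=1), nn.PReLU(32),
        )
        # stages 2 and 3 stand in for stacks of M and N CG blocks
        self.stage2 = nn.Conv2d(32 + 3, 64, 3, stride=2, padding=1)   # -> 1/4
        self.stage3 = nn.Conv2d(64 + 3, 128, 3, stride=2, padding=1)  # -> 1/8
        self.classifier = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        s1 = self.stage1(x)                                 # 1/2 resolution
        # input injection: downsample the image to each stage's input resolution
        s2 = self.stage2(torch.cat([s1, F.avg_pool2d(x, 2)], dim=1))  # -> 1/4
        s3 = self.stage3(torch.cat([s2, F.avg_pool2d(x, 4)], dim=1))  # -> 1/8
        logits = self.classifier(s3)
        # upsample the 1/8-resolution prediction back to the input size
        return F.interpolate(logits, size=x.shape[2:], mode='bilinear',
                             align_corners=False)
```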
Note that CG block is employed in all units of stage 2 and 3, which means CG block is utilized almost in all the stages of CGNet. Therefore, CGNet has the capability of aggregating contextual information from bottom to top, in both semantic level from deep layers and spatial level from shallow layers. Compared with existing segmentation frameworks which ignore the contextual information or only capture contextual information from the semantic level by performing context module after the encoding stage, the structure of CGNet is elaborately tailored for semantic segmentation to improve the accuracy.
Furthermore, in order to further reduce the number of parameters, the local feature extractor and the surrounding context extractor employ channel-wise convolutions, which remove the computational cost across channels and save much memory. Previous work employs a 1×1 convolutional layer followed by channel-wise convolutions to promote the flow of information between channels. However, this design is not suitable for the proposed CG block, since the local feature and the surrounding context in the CG block need to maintain channel independence. Additional experiments also verify this observation.
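The saving from channel-wise convolution is easy to quantify: setting the number of groups equal to the channel count drops the cross-channel factor from the parameter count. A small illustrative calculation (the 128-channel, 3×3 example is arbitrary):

```python
def conv_params(k, c_in, c_out, groups=1, bias=False):
    """Parameter count of a k x k 2D convolution (weights only by default)."""
    assert c_in % groups == 0 and c_out % groups == 0
    p = k * k * (c_in // groups) * c_out
    return p + (c_out if bias else 0)

# A 3x3 convolution over 128 channels:
standard = conv_params(3, 128, 128)                  # full cross-channel mixing
channelwise = conv_params(3, 128, 128, groups=128)   # one filter per channel

print(standard, channelwise, standard // channelwise)  # → 147456 1152 128
```

The channel-wise variant is smaller by exactly a factor of the channel count.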
The ENet unit employs a single main convolutional layer to extract a single-scale feature, which results in a lack of local features in the deeper layers and a lack of surrounding context in the shallow layers of the network. The MobileNet unit employs a depth-wise separable convolution that factorizes a standard convolution into a depth-wise convolution and a point-wise convolution. Our proposed CG block can be treated as a generalization of the MobileNet unit: when the surrounding context extractor and the global context extractor are removed, the CG block degenerates to the MobileNet unit. The ESP unit employs K parallel dilated convolutional kernels with different dilation rates to learn multi-scale features. The Inception unit is proposed to approximate a sparse structure and process multi-scale visual information for image classification. The CCL unit leverages the informative context and spotlights the local information in contrast to the context; it learns locally discriminative features from block3, block4 and block5 of ResNet-101, and fuses different-scale features through a gated sum scheme in the decoder phase. In contrast to these units, the CG block is proposed to learn the joint feature of both the local feature and the surrounding context feature in the encoder phase.
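For reference, a MobileNet-style depth-wise separable convolution can be sketched as follows; the channel sizes are arbitrary examples, and the parameter counts in the comments illustrate the factorization being discussed.

```python
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    """MobileNet-style unit: a depth-wise 3x3 conv, then a point-wise 1x1 conv."""
    def __init__(self, c_in, c_out):
        super().__init__()
        # one 3x3 filter per input channel (no cross-channel mixing)
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        # 1x1 conv mixes information across channels
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

m = DepthwiseSeparable(32, 64)
params = sum(p.numel() for p in m.parameters())
# depth-wise: 3*3*32 = 288, point-wise: 32*64 = 2048, total 2336,
# versus 3*3*32*64 = 18432 for a standard 3x3 convolution
```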
In this section, we evaluate the proposed CGNet on Cityscapes  and CamVid. Firstly, we introduce the datasets and the implementation protocol. Then the contributions of each component are investigated in ablation experiments on Cityscapes validation set. Finally, we perform comprehensive experiments on Cityscapes and CamVid benchmarks and compare with the state-of-the-art works to verify the effectiveness of CGNet.
The Cityscapes dataset contains 5,000 images collected in street scenes from 50 different cities. The dataset is divided into three subsets: 2,975 training images, 500 validation images and 1,525 testing images. High-quality pixel-level annotations for 19 semantic classes are provided. Segmentation performance is reported using the commonly used Intersection-over-Union (IoU).
The CamVid dataset is a road-scene dataset captured from the perspective of a driving automobile. It contains 367 training images, 101 validation images and 233 testing images at a resolution of 480×360. Performance is measured by pixel Intersection-over-Union (IoU) averaged across the 11 classes.
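The IoU metric used by both benchmarks can be computed from a confusion matrix; a dependency-free sketch (the default class count and ignore label follow Cityscapes conventions):

```python
def mean_iou(preds, labels, num_classes=19, ignore_index=255):
    """Mean class-wise IoU from flat prediction/label sequences."""
    conf = [[0] * num_classes for _ in range(num_classes)]
    for p, l in zip(preds, labels):
        if l == ignore_index:
            continue  # unlabeled pixels are excluded from evaluation
        conf[l][p] += 1
    ious = []
    for c in range(num_classes):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp                                  # missed pixels of class c
        fp = sum(conf[r][c] for r in range(num_classes)) - tp   # wrongly predicted as c
        denom = tp + fp + fn
        if denom > 0:  # skip classes absent from both predictions and labels
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```

For example, `mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)` yields (1/2 + 2/3) / 2 = 7/12.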
All experiments are performed on the PyTorch platform. We employ the “poly” learning rate policy, in which the current learning rate equals the base learning rate multiplied by (1 − iter/max_iter)^power. For optimization, we use Adam with batch size 14, betas = (0.9, 0.999), and weight decay during training. For data augmentation, we employ random mirroring, mean subtraction and random scaling of the input images during training. The number of training iterations is the same for Cityscapes and CamVid. Our loss function is the sum of cross-entropy terms over each spatial position in the output score map, ignoring unlabeled pixels.
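The “poly” schedule mentioned above decays the learning rate polynomially over training; a sketch (the base learning rate and power shown are placeholders, since the paper's exact values are not preserved in this text):

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'poly' learning rate policy: base_lr * (1 - cur_iter/max_iter) ** power."""
    return base_lr * (1.0 - cur_iter / float(max_iter)) ** power

# example: the learning rate decays from base_lr to 0 over max_iter iterations
schedule = [poly_lr(0.001, i, 100) for i in range(0, 101, 25)]
```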
We adopt three schemes to evaluate the effectiveness of the surrounding context extractor. (1) No: the CGNet_M3N15 model does not employ the surrounding context extractor, and is configured with the same number of parameters by increasing the number of channels. (2) Single: the surrounding context extractor is employed only in the last block of the framework. (3) Full: the surrounding context extractor is employed in all blocks of the framework. Results are shown in Tab. 2. Both the second and third schemes improve the accuracy, which shows that the surrounding context is very beneficial for segmentation accuracy and should be employed in all blocks of the framework.
Accuracy, parameter and memory analysis. FLOPS and memory are estimated for an input of 3×640×360. “-” indicates that the approach does not report the corresponding result; “MS” indicates multi-scale inputs with average fusion during testing.
We use the global context to refine the joint feature. As shown in Tab. 3, the global context extractor improves the accuracy, which demonstrates that it is desirable for the proposed approach.
We adopt an input injection mechanism, which down-samples the input image to the resolutions of stage 2 and stage 3 and injects it into the corresponding stage. As shown in Tab. 4, this mechanism improves the accuracy. Intuitively, the improvement comes from the increased flow of information through the network.
We train the proposed CGNet with different numbers of blocks in each stage and show the trade-off between accuracy and the number of parameters in Tab. 6. In general, deeper networks perform better than shallow ones at the expense of increased computational cost and model size. From Tab. 6, we find that segmentation accuracy does not always increase with M when N is fixed. For example, when we fix N and increase M from 3 to 6, the mean IoU drops by 0.2 points. We therefore set M = 3 (the number of CG blocks in stage 2) for CGNet. Furthermore, we trade off accuracy against model size by varying N (the number of CG blocks in stage 3). Our approach achieves its highest mean IoU of 63.5% on the Cityscapes validation set when M = 3 and N = 21.
Inspired by , residual learning is employed in the CG block to further improve the information flow. As shown in Tab. 7, GRL improves the accuracy compared with LRL. One possible reason is that GRL has a stronger ability to promote the flow of information in the network, so we choose GRL for the proposed CG block.
Previous work employs a 1×1 convolution followed by channel-wise convolutions to improve the flow of information between channels and promote inter-channel interaction. Here, we try this 1×1 convolution in the CG block but find that it damages the segmentation accuracy: as shown in Tab. 8, removing the 1×1 convolutions improves the accuracy. In other words, this interaction mechanism severely hampers the accuracy of our models. One possible reason is that the local feature and the surrounding context feature need to maintain channel independence.
Tab. 9 reports a comparison of the FLOPS (floating-point operations), memory footprint and parameters of different models. The efficiency of CGNet_M3N21 is evident compared with the current smallest semantic segmentation models. The number of parameters of CGNet_M3N21 is close to that of ENet, yet our method is 6.5% higher in mean IoU. Furthermore, the accuracy of our approach is 4.5% higher than that of the very recent ESPNet. With so few parameters and such a small memory footprint, CGNet is well suited for deployment on mobile devices. Moreover, compared with deep state-of-the-art semantic segmentation networks, CGNet_M3N21 is 131 and 57 times smaller than PSPNet and DenseASPP, respectively, while its category-wise accuracy is only 5.4% and 5.5% lower.
| Method | Pretrain | Parameters (M) | mIoU cat (%) | mIoU cla (%) |
| --- | --- | --- | --- | --- |
| BiSeNet_MS (Xception) |  |  |  |  |
| BiSeNet_MS (ResNet-18) | ImageNet | 27.0 | - | 77.7 |
| PSPNet_MS | ImageNet | 65.7 | 90.6 | 78.4 |
| ENet | From scratch | 0.4 | 80.4 | 58.3 |
| ESPNet | From scratch | 0.4 | 82.2 | 60.3 |
| FRRN | From scratch | 17.7 | - | 63 |
We report the inference speed of the proposed CGNet_M3N21 on the Cityscapes test set and compare with other state-of-the-art methods in Tab. 10. For fairness, we re-implement these methods on a Tesla K80, since some of them do not report running time, some may have adopted very time-consuming multi-scale testing for higher accuracy, and some reported their speeds on different hardware platforms. As shown in Tab. 10, the current high-accuracy models PSPNet and DenseASPP take more than 0.5 seconds to predict a segmentation result on a Tesla K80 GPU during testing. In contrast, our method achieves 50 fps with a negligible sacrifice in accuracy. Furthermore, compared with previous small-memory-footprint models, e.g. SegNet, ENet, ICNet and BiSeNet, our method improves inference speed significantly. Note that our approach is faster than the very recent ESPNet, while its accuracy is 4.5% higher. Overall, our model runs at 50 fps on 2048×1024 high-resolution images using only one Tesla K80 GPU.
We report the evaluation results of the proposed CGNet_M3N21 on the Cityscapes test set and compare with other state-of-the-art methods in Tab. 11. Without any pre-processing, post-processing, or additional modules (such as ASPP or PPM), our CGNet_M3N21 achieves 64.8% mean IoU (training only on fine-annotated images). Note that we do not employ any testing tricks, such as multi-scale inputs or complex upsampling. We list the number of model parameters and the segmentation accuracy in Tab. 11. Compared with methods that do not require ImageNet pretraining, our CGNet_M3N21 achieves a relatively large accuracy gain; for example, its mean IoU is about 6.5% higher than that of ENet with almost no increase in model parameters. Besides, it is even quantitatively better than some methods pretrained on ImageNet without regard for memory footprint and speed, such as SegNet, whose model parameters are about 60 times greater. We visualize some segmentation results on the Cityscapes validation set in Fig. 6. Tab. 12 shows the accuracy of the proposed CGNet_M3N21 on the CamVid dataset. We use the training and validation sets to train our model, and a 480×360 resolution for training and evaluation. The number of parameters of CGNet_M3N21 is close to that of the current smallest semantic segmentation model, ENet, while the accuracy of our method is 14.3% higher.
In this paper, we rethink semantic segmentation from its inherent characteristics, which involve both image recognition and object localization. We propose a novel Context Guided (CG) block for learning the joint feature of both the local feature and the surrounding context. Based on the CG block, we develop CGNet, a light-weight Context Guided Network for semantic segmentation that allows very memory-efficient inference, significantly enhancing the practicality of semantic segmentation in real-world scenarios. Our approach achieves 64.8% mean IoU on the Cityscapes test set with less than 0.5 M parameters, and runs at 50 fps on 2048×1024 high-resolution images.
The Cityscapes Dataset for Semantic Urban Scene Understanding. In CVPR, 2016.