Prior Guided Feature Enrichment Network for Few-Shot Segmentation

08/04/2020 · by Zhuotao Tian, et al. · The Chinese University of Hong Kong

State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results and hardly work on unseen classes without fine-tuning. Few-shot segmentation is thus proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples. These frameworks still face the challenge of reduced generalization ability on unseen classes due to inappropriate use of high-level semantic information of training classes and spatial inconsistency between query and support targets. To alleviate these issues, we propose the Prior Guided Feature Enrichment Network (PFENet). It consists of novel designs of (1) a training-free prior mask generation method that not only retains generalization power but also improves model performance and (2) a Feature Enrichment Module (FEM) that overcomes spatial inconsistency by adaptively enriching query features with support features and prior masks. Extensive experiments on PASCAL-5^i and COCO prove that the proposed prior generation method and FEM both improve the baseline method significantly. Our PFENet also outperforms state-of-the-art methods by a large margin without efficiency loss. It is surprising that our model even generalizes to cases without labeled support samples. Our code is available at







1 Introduction

Rapid development of deep learning has brought significant improvement to semantic segmentation. The iconic frameworks

[pspnet, deeplab] have benefited a wide range of applications, such as autonomous driving, robot vision, and medical imaging. The performance of these frameworks, however, degrades quickly without sufficient fully-labeled data or when working on unseen classes. Even if additional data is provided, fine-tuning is still time- and resource-consuming.

To address this issue, few-shot segmentation was proposed [shaban], where data is divided into a support set and a query set. As shown in Figure 1, images from both support and query sets are first sent to the backbone network to extract features. Feature processing can be accomplished by generating weights for the classifier [shaban, diffentiable], by cosine-similarity calculation [prototype, panet, partaware], or by convolutions [AttantionMCG, canet, mmm, SimPropNet, texturebias] to generate the final prediction.

The support set provides information about the target class that helps the model to make accurate segmentation prediction on the query images. This process mimics the scenario where a model makes the prediction of unseen classes on testing images (query) with few labeled data (support). Therefore, a few-shot model needs to quickly adapt to the new classes. However, the common problems of existing few-shot segmentation methods include generalization loss due to misuse of high-level features and spatial inconsistency between the query and support samples. In this paper, we mainly tackle these two difficulties.

Generalization Reduction & High-level Features   Common semantic segmentation models rely heavily on high-level features with semantic information. Experiments of CANet [canet] show that simply adding high-level features during feature processing in a few-shot model causes a performance drop, so utilizing semantic information in the few-shot setting is not straightforward. Unlike previous methods, we use ImageNet [imagenet] pre-trained high-level features of the query and support images to produce 'priors' for the model. These priors help the model better identify targets in query images. Since the prior generation process is training-free, the resulting model does not lose generalization ability to unseen classes, despite the frequent use of high-level information of seen classes during training.

Fig. 1: Summary of recent few-shot segmentation frameworks. The backbone method used to extract support and query features can be either a single shared network or two Siamese networks.

Spatial Inconsistency   Besides, due to the limited samples, the scale and pose of each support object may differ greatly from its query target, which we call spatial inconsistency. To tackle this problem, we propose a new module named Feature Enrichment Module (FEM) to adaptively enrich query features with the support features. The ablation study in Section 4.3 shows that merely incorporating a multi-scale scheme to tackle the spatial inconsistency is sub-optimal: FEM additionally provides conditioned feature selection that helps retain essential information passed across different scales. As a result, FEM achieves superior performance to other multi-scale structures, such as HRNet [hrnet_pami], PPM [pspnet], ASPP [aspp] and GAU [Zhang_2019_ICCV].

Finally, based on the proposed prior generation method and Feature Enrichment Module (FEM), we build a new network – the Prior Guided Feature Enrichment Network (PFENet). The ResNet-50 based PFENet contains only 10.8M learnable parameters, yet achieves new state-of-the-art results on both the PASCAL-5 [shaban] and COCO [coco] benchmarks, running at 15.9 and 5.1 FPS in the 1-shot and 5-shot settings respectively. Moreover, we demonstrate its effectiveness by applying our model to the zero-shot scenario where no labeled data is available. The result is surprising – PFENet still achieves decent performance without major structural modification.

Our contribution in this paper is threefold:

  • We leverage high-level features and propose training-free prior generation to greatly improve prediction accuracy and retain high generalization.

  • By incorporating the support feature and prior information, our FEM helps adaptively refine the query feature with the conditioned inter-scale information interaction.

  • PFENet achieves new state-of-the-art results on both PASCAL-5 and COCO datasets without compromising efficiency.

2 Related Work

2.1 Semantic Segmentation

Semantic segmentation is a fundamental topic whose goal is to predict the label of each pixel. The Fully Convolutional Network (FCN) [fcn] was developed for semantic segmentation by replacing the fully-connected layer in a classification framework with convolutional layers. Subsequent approaches, such as DeepLab [deeplab], DPN [liu2015semantic] and CRF-RNN [zheng2015conditional], utilize CRF/MRF to refine coarse predictions. The receptive field is important for semantic segmentation; thus DeepLab [deeplab] and Dilation [yu2016multi] introduce dilated convolution to enlarge it. Encoder-decoder structures [unet, ghiasi2016laplacian, lin2017refine] are adopted to reconstruct and refine segmentation in steps.

Contextual information is vital for complex scene understanding. ParseNet [liu2015parsenet] applies global pooling for semantic segmentation. PSPNet [pspnet] utilizes a Pyramid Pooling Module (PPM) for context aggregation over different regions, which is very effective, and DeepLab [deeplab] develops atrous spatial pyramid pooling (ASPP) with filters of different dilation rates. Attention models are also introduced: PSANet [psanet] develops point-wise spatial attention with a bi-directional information propagation paradigm, and channel-wise attention [encnet] and non-local style attention [cooccurant, danet, yuan2018ocnet, ccnet] are also effective for segmentation. These methods work well on classes with abundant samples, but they are not designed to deal with rare or unseen classes and cannot be easily adapted without fine-tuning.

2.2 Few-shot Learning

Few-shot learning aims at image classification when only a few training examples are available. There are meta-learning based methods [memory_match, dynamic_noforget, maml] and metric-learning ones [matchingnet, relationnet, prorotypenet, deepemd]. Data is essential to deep models; therefore, several methods improve performance by synthesizing more training samples [hallucinate_saliency, hallucinating, imaginary]. Different from few-shot learning where prediction is at the image-level, few-shot segmentation makes pixel-level predictions, which is much more challenging.

Our work closely relates to metric-learning based few-shot learning methods. The prototypical network [prorotypenet] is trained to map input data to a metric space where classes are represented as prototypes. During inference, classification is achieved by finding the closest prototype for each input image, because data belonging to the same class should be close to the prototype. Another representative metric-based work is the relation network [relationnet], which projects query and support images to 1×1 vectors and then performs classification based on the cosine similarity between them.

2.3 Few-shot Segmentation

Few-shot segmentation places the general semantic segmentation in a few-shot scenario, where models perform dense pixel labeling on new classes with only a few support samples. OSLSM [shaban] first tackles few-shot segmentation by learning to generate weights of the classifier for each class. PL [prototype] applies prototyping [prorotypenet] to the segmentation task. It learns a prototype for each class and calculates the cosine similarity between pixels and prototypes to make the prediction. More recently, CRNet [crnet] processes query and support images through a Siamese Network followed by a Cross-Reference Module to mine cooccurrent features in two images. PANet [panet] introduces prototype alignment regularization that encourages the model to learn consistent embedding prototypes for better performance, and CANet [canet] uses the iterative optimization module on the merged query and support feature to iteratively refine results.

Similar to CANet [canet], we use convolution to replace the cosine similarity that may not well tackle complex pixel-wise classification in the segmentation task. However, different from CANet, our baseline model uses fewer convolution operations and still achieves decent performance.

As discussed before, these few-shot segmentation methods do not sufficiently consider generalization loss and spatial inconsistency. Unlike PGNet [Zhang_2019_ICCV] that uses a graph-based pyramid structure to refine results via Graph Attention Unit (GAU) followed by three residual blocks and an ASPP [aspp], we instead incorporate a few basic convolution operations with the proposed prior masks and FEM in a multi-scale structure to accomplish decent performance.

Fig. 2: Illustration of the training-free prior generation on eight classes (boat, cow, motor, train, tv/monitor, potted plant, bus, person). Top: support images with the masked area in the target class. Middle: query images. Bottom: prior masks of query images where the regions of interest are highlighted.

3 Our Method

In this section, we first briefly describe the few-shot segmentation task in Section 3.1. Then, we present the prior generation method and the Feature Enrichment Module (FEM) in Sections 3.2 and 3.3 respectively. Finally, in Section 3.4, details of our proposed Prior Guided Feature Enrichment Network (PFENet) are discussed.

3.1 Task Description

A few-shot semantic segmentation system involves two sets: a query set Q and a support set S. Given K samples of an unseen class c from the support set S, the goal is to segment the area of class c in each query image from the query set Q.

Models are trained on base classes and tested on previously unseen novel classes in episodes. The episode paradigm was proposed in [matchingnet] and was first applied to few-shot segmentation in [shaban]. Each episode is formed by a support set S and a query set Q of the same class c. The support set S = {(I_S^1, M_S^1), ..., (I_S^K, M_S^K)} consists of K samples of class c, which we call the 'K-shot scenario'; the k-th support sample is a pair (I_S^k, M_S^k), where I_S^k and M_S^k are the support image and label of class c respectively. The query set is Q = {(I_Q, M_Q)}, where I_Q is the input query image and M_Q is the ground-truth mask of class c. The query-support pair {I_Q, S} forms the input data batch to the model. The ground truth M_Q is invisible to the model and is only used to evaluate the prediction on the query image in each episode.
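The episodic setup above can be sketched as a tiny sampler. The dataset layout, function name, and seeding below are illustrative assumptions, not the authors' implementation:

```python
import random

def sample_episode(dataset, class_id, k_shot, seed=None):
    """Sample one few-shot episode: K support pairs and one query pair
    for a single class. `dataset` maps a class id to a list of
    (image, mask) pairs; these names are hypothetical."""
    rng = random.Random(seed)
    picks = rng.sample(dataset[class_id], k_shot + 1)  # unique samples
    support = picks[:k_shot]              # K labeled (image, mask) pairs
    query_image, query_mask = picks[-1]   # mask used only for evaluation
    return support, (query_image, query_mask)
```

During training, episodes are drawn from the base classes; at test time the same procedure is applied to the novel classes.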

Fig. 3: Overview of our Prior Guided Feature Enrichment Network with the prior generation and Feature Enrichment Module. White blocks represent the high- and middle-level features extracted from the backbone.

3.2 Prior for Few-Shot Segmentation

3.2.1 Important Observations

CANet [canet] outperforms previous work by a large margin on the benchmark PASCAL-5 dataset by extracting only middle-level features from the backbone (e.g., conv3_x and conv4_x of ResNet-50). Experiments in CANet also show that high-level features (e.g., conv5_x of ResNet-50) lead to a performance reduction. It is explained in [canet] that the middle-level features perform better since they constitute object parts shared by unseen classes. Our alternative explanation is that the semantic information contained in high-level features is more class-specific than that in middle-level features, so the former is more likely to harm the model's generalization power to unseen classes. In addition, high-level features directly provide semantic information of the training classes, contributing more to identifying pixels of those classes and reducing the training loss than middle-level information does. Consequently, the model develops a preference for the training classes. The lack of generalization and this preference for the training classes are both harmful for evaluation on unseen test classes.

It is noteworthy that contrary to the finding that high-level feature adversely affects performance in few-shot segmentation, prior segmentation frameworks [icnet, unet] exploit these features to provide semantic cues for final prediction. This contradiction motivates us to find a way to make use of high-level information in a training-class-insensitive way to boost performance in few-shot segmentation.

3.2.2 Prior Generation

In our work, we transform the ImageNet [imagenet] pre-trained high-level features, which contain semantic information, into a prior mask that tells the probability of pixels belonging to the target class, as shown in Figure 2. During training, the backbone parameters are fixed, as in [panet, canet]. Therefore, the prior generation process does not bias towards the training classes and upholds class-insensitivity during the evaluation on unseen test classes. Let I_Q and I_S denote the input query and support images, M_S the binary support mask, F the backbone network, and X_Q and X_S the high-level query and support features. We have

X_Q = F(I_Q),   X_S = F(I_S) ⊙ M_S,   (1)

where ⊙ is the Hadamard product and the sizes of X_Q and X_S are both C × h × w. Note that the output of F is processed with a ReLU function, so multiplying by the binary support mask M_S removes the background of the support feature by setting it to zero.

Specifically, we define the prior Y_Q of the query feature as a mask that reveals the pixel-wise correspondence between X_Q and X_S. A pixel of the query feature with a high value on Y_Q has high correspondence with at least one pixel of the support feature, and thus is very likely to be in the target area of the query image. By setting the background of the support feature to zero, pixels of the query feature yield no correspondence with the background of the support feature – they only correlate with the foreground target area. To generate Y_Q, we first calculate the pixel-wise cosine similarity between feature vectors x_q ∈ X_Q and x_s ∈ X_S as

cos(x_q, x_s) = x_q^T x_s / (||x_q|| ||x_s||),   q, s ∈ {1, 2, ..., hw}.   (2)

For each x_q, we take the maximum similarity among all support pixels as its correspondence value

c_q = max_{s ∈ {1, ..., hw}} cos(x_q, x_s),   q ∈ {1, 2, ..., hw}.   (3)

Then we produce the prior mask Y_Q by reshaping C_Q = [c_1, c_2, ..., c_hw] into the shape h × w (Eq. (4)) and applying a min-max normalization (Eq. (5)) to map the values into [0, 1], as shown in Figure 2:

Y_Q = (Y_Q − min(Y_Q)) / (max(Y_Q) − min(Y_Q) + ε),   (5)

where ε is a small constant for numerical stability.

The key point of our proposed prior generation method lies in the use of fixed high-level features to yield the prior mask by taking the maximum value from a similarity matrix of size hw × hw, as given in Eqs. (2) and (3), which is rather simple and effective. The ablation study comparing alternative methods used in [panet, weighingboosting, SG-One] in Section 4.4 demonstrates the superiority of our method.
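The whole prior computation can be written in a few lines. Below is a minimal NumPy sketch of the pipeline (background masking, pairwise cosine similarity, per-pixel maximum, min-max normalization); shapes and the function name are illustrative, and the small epsilon guards the divisions:

```python
import numpy as np

def prior_mask(q_feat, s_feat, s_mask, eps=1e-7):
    """Training-free prior: for every query location, take the maximum
    cosine similarity to any foreground support location, then min-max
    normalize to [0, 1]. Features are (C, h, w); the support mask is
    (h, w). Illustrative sketch, not the authors' exact code."""
    C, h, w = q_feat.shape
    q = q_feat.reshape(C, -1).T                   # (hw, C) query vectors
    s = (s_feat * s_mask[None]).reshape(C, -1).T  # background zeroed out
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + eps)
    sn = s / (np.linalg.norm(s, axis=1, keepdims=True) + eps)
    sim = qn @ sn.T                 # (hw, hw) cosine similarity matrix
    corr = sim.max(axis=1)          # best match per query pixel
    corr = (corr - corr.min()) / (corr.max() - corr.min() + eps)
    return corr.reshape(h, w)
```

Because the backbone is frozen, this computation involves no learnable parameters, which is exactly why it stays class-insensitive.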

3.3 Feature Enrichment Module

3.3.1 Motivation

Existing few-shot segmentation frameworks [canet, shaban, AttantionMCG, panet, weighingboosting, guide, adaptivemaskweightimprinting, prototype] use masked global average pooling to extract class vectors from support images before further processing. However, global pooling on support images results in spatial information inconsistency, since the area of the query target may be much larger or smaller than that of the support samples. Therefore, using a globally pooled support feature to directly match each pixel of the query feature is not ideal.

A natural alternative is to add PPM [pspnet] or ASPP [aspp] to provide multi-level spatial information to the feature. PPM and ASPP help the baseline model yield better performance (as demonstrated in our later experiments). However, these two modules are suboptimal in that: 1) they provide spatial information to merged features without specific refinement process within each scale; 2) the hierarchical relations across different scales are ignored.

To alleviate these issues, we disentangle the multi-scale structure and propose the feature enrichment module (FEM) to 1) horizontally interact the query feature with the support features and prior masks in each scale, and 2) vertically leverage the hierarchical relations to enrich coarse feature maps with essential information extracted from the finer feature via a top-down information path. After horizontal and vertical optimization, features projected into different scales are then collected to form the new query feature. Details of FEM are as follows.

Fig. 4: Visual illustration of FEM (dashed box) with four scales and a top-down path. C, 1×1 and the circled M represent concatenation, 1×1 convolution and the inter-scale merging module respectively. Activation functions are ReLU.

3.3.2 Module Structure

As shown in Figure 3, the feature enrichment module (FEM) takes the query feature, prior mask and support feature as input. It outputs the refined query feature with enriched information from the support feature. The enrichment process can be divided into three sub-processes of 1) inter-source enrichment that first projects input to different scales and then interacts the query feature with support feature and prior mask in each scale independently; 2) inter-scale interaction that selectively passes essential information between merged query-support features across different scales; and 3) information concentration that merges features in different scales to finally yield the refined query feature. An illustration of FEM with four scales and a top-down path for inter-scale interaction is shown in Figure 4.

Fig. 5: Visual illustration of the inter-scale merging module F_M. C is concatenation and ⊕ is pixel-wise addition; the 1×1 box is a 1×1 convolution and the 3×3 box represents two 3×3 convolutions. Activation functions are ReLU. For features that do not have auxiliary features, there is no concatenation with the auxiliary feature and the refined feature is produced from the main feature alone.

Inter-Source Enrichment   In FEM, B = {b_1, b_2, ..., b_n} denotes the n spatial sizes used for average pooling, in descending order b_1 > b_2 > ... > b_n. The input query feature X_Q is first processed with adaptive average pooling to generate n sub-query features X_Q,i of spatial sizes b_i × b_i. The globally average-pooled support feature is expanded to n feature maps X_S,i (b_i × b_i), and the prior Y_Q is accordingly resized to n masks Y_Q,i (b_i × b_i).

Then, for each scale i ∈ {1, ..., n}, we concatenate X_Q,i, X_S,i and Y_Q,i, and process each concatenated feature with a 1×1 convolution to generate the merged query feature

X_Q,i^m = F_1×1([X_Q,i, X_S,i, Y_Q,i]),   i ∈ {1, ..., n},   (6)

where F_1×1 represents the 1×1 convolution that yields the merged feature.
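The pool-expand-resize-concatenate step can be sketched as follows; the trailing 1×1 convolution of Eq. (6) is omitted, the pooling helper assumes evenly divisible sizes for brevity, and all names are illustrative:

```python
import numpy as np

def adaptive_avg_pool(x, size):
    """Average-pool a (C, H, W) map to (C, size, size), in the spirit of
    torch.nn.AdaptiveAvgPool2d; assumes H and W are divisible by size."""
    C, H, W = x.shape
    return x.reshape(C, size, H // size, size, W // size).mean(axis=(2, 4))

def inter_source_enrichment(q_feat, s_vec, prior, sizes):
    """At each scale b: pool the query feature to b x b, expand the
    globally pooled support vector, resize the prior, and concatenate
    along channels, yielding a (2C + 1, b, b) map per scale."""
    merged = []
    for b in sizes:
        q_b = adaptive_avg_pool(q_feat, b)                      # (C, b, b)
        s_b = np.broadcast_to(s_vec[:, None, None], q_b.shape)  # expand
        p_b = adaptive_avg_pool(prior[None], b)                 # (1, b, b)
        merged.append(np.concatenate([q_b, s_b, p_b], axis=0))
    return merged
```

Each element of the returned list would then pass through its own 1×1 convolution to produce X_Q,i^m.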

Inter-Scale Interaction   It is worth noting that tiny objects may not exist in the down-sampled feature maps. A top-down path adaptively passing information from finer features to the coarse ones is conducive to building a hierarchical relationship within our feature enrichment module. Now the interaction is between not only the query and support features in each scale (horizontal), but also the merged features of different scales (vertical), which is beneficial to the overall performance.

The circled M in Figure 4 represents the inter-scale merging module F_M that interacts between different scales by selectively passing useful information from the auxiliary feature to the main feature to generate the refined feature X_Q,i^new. This process can be written as

X_Q,i^new = F_M(X_main,i, X_aux,i),   i ∈ {1, ..., n},   (7)

where X_main,i is the main feature and X_aux,i is the auxiliary feature of the i-th scale. For example, in an FEM with a top-down path for inter-scale interaction, the finer feature (auxiliary) provides additional information to the coarser feature (main): the main feature is the merged feature X_Q,i^m and the auxiliary feature comes from the adjacent finer scale. Other alternatives for inter-scale interaction include the bottom-up path that enriches finer features (main) with information coming from the coarser ones (auxiliary), and the bi-directional variants, i.e., a top-down path followed by a bottom-up path, and a bottom-up path followed by a top-down path. The top-down path shows its superiority in Section 4.3.1.

The specific structure of the inter-scale merging module F_M is shown in Figure 5. We first resize the auxiliary feature to the spatial size of the main feature. Then a 1×1 convolution extracts useful information from the auxiliary feature conditioned on the main feature, and the two following 3×3 convolutions finish the interaction and output the refined feature. The residual link within the inter-scale merging module preserves the integrity of the main feature in the output X_Q,i^new. For features that do not have auxiliary features (e.g., the first merged feature in the top-down path and the last merged feature in the bottom-up path), we simply skip the concatenation with the auxiliary feature in F_M – the refined feature is produced from the main feature alone.
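The structure of the merging step can be sketched as below. The channel-mixing matrices stand in for the 1×1 convolution and the pair of 3×3 convolutions (spatial kernels are omitted for brevity), the nearest-neighbour resize assumes divisible sizes, and the weights are illustrative rather than trained:

```python
import numpy as np

def upsample_nearest(x, size):
    """Nearest-neighbour resize of a (C, h, h) map to (C, size, size);
    assumes size is a multiple of h."""
    r = size // x.shape[-1]
    return x.repeat(r, axis=1).repeat(r, axis=2)

def merge(main, aux, w1, w2):
    """Structural sketch of F_M: resize the auxiliary feature, concatenate
    with the main feature, reduce with a 1x1-style channel mix (w1, ReLU),
    refine (w2 stands in for the two 3x3 convolutions), then add the main
    feature back through the residual link."""
    aux_up = upsample_nearest(aux, main.shape[-1])
    cat = np.concatenate([main, aux_up], axis=0)          # (2C, b, b)
    x = np.maximum(np.einsum('oc,chw->ohw', w1, cat), 0)  # reduce + ReLU
    x = np.maximum(np.einsum('oc,chw->ohw', w2, x), 0)    # refine + ReLU
    return main + x                                       # residual link
```

The residual addition at the end is what keeps the main feature intact in the output, as described above.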

Fig. 6: Visual illustration of the baseline structure that processes features in the original spatial size of the input features.

Information Concentration   After inter-scale interaction, n refined feature maps X_Q,i^new are obtained. Finally, the output query feature X_Q,new is formed by interpolation and concatenation of the n refined feature maps followed by a 1×1 convolution:

X_Q,new = F_1×1([U(X_Q,1^new), ..., U(X_Q,n^new)]),   (8)

where U denotes interpolation to a common spatial size. The visual illustration of the baseline model without FEM is shown in Figure 6. To encourage better feature enrichment, we add intermediate supervision by attaching a classification head (Figure 7(b)) to each X_Q,i^new.
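Eq. (8) amounts to an upsample-concatenate-mix step. A self-contained sketch, where a channel-mixing matrix stands in for the 1×1 convolution and nearest-neighbour interpolation replaces bilinear for simplicity:

```python
import numpy as np

def upsample_nearest(x, size):
    """Nearest-neighbour resize of a (C, h, h) map to (C, size, size);
    assumes size is a multiple of h."""
    r = size // x.shape[-1]
    return x.repeat(r, axis=1).repeat(r, axis=2)

def concentrate(refined, w_out):
    """Sketch of Eq. (8): upsample every refined map to the finest scale,
    concatenate along channels, and mix with a 1x1-style matrix w_out
    (illustrative weights, not the trained convolution)."""
    size = max(f.shape[-1] for f in refined)
    ups = [upsample_nearest(f, size) for f in refined]
    cat = np.concatenate(ups, axis=0)            # (n*C, b1, b1)
    return np.einsum('oc,chw->ohw', w_out, cat)  # (C_out, b1, b1)
```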

In summary, by incorporating the pooled support features and prior masks into query features of different spatial sizes, the model learns to adaptively enrich the query feature with information from the support feature at each location, under the guidance of the prior mask and the supervision of ground truth. Moreover, the vertical inter-scale interaction supplements the main feature with the conditioned information provided by the auxiliary feature. Therefore, FEM yields a greater performance gain over the baseline than other feature enhancement designs (e.g., PPM [pspnet], ASPP [aspp] and GAU [Zhang_2019_ICCV]). Experiments in Section 4.3 provide more details.

3.4 Prior Guided Feature Enrichment Network

3.4.1 Model Description

Based on the proposed prior generation method and the feature enrichment module (FEM), we propose the Prior Guided Feature Enrichment Network (PFENet), as shown in Figure 3. The ImageNet [imagenet] pre-trained CNN is shared by support and query images to extract features. The extracted middle-level support and query features are processed by a 1×1 convolution to reduce the channel number to 256.

After feature extraction and channel reduction, the feature enrichment module (FEM) enriches the query feature with the support feature and prior mask. On the output feature of FEM, we apply a convolution block (Figure 7(a)) followed by a classification head to yield the final prediction. The classification head is composed of one 3×3 convolution and one 1×1 convolution with a Softmax function, as shown in Figure 7(b). For all backbone networks, we use the outputs of the last layers of conv3_x and conv4_x, concatenated, as middle-level features to generate the query and support features, and take the output of the last layer of conv5_x as the high-level feature to produce the prior mask.

In the 5-shot setting, we simply take the average of 5 pooled support features as the new support feature before concatenation with the query feature. Similarly, the final prior mask before the concatenation in FEM is also obtained by averaging five prior masks produced by one query feature with different support features.
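The K-shot aggregation described above is a plain average over shots; a one-line sketch with illustrative names:

```python
import numpy as np

def k_shot_aggregate(support_vecs, prior_masks):
    """K-shot handling as described in the text: average the K pooled
    support feature vectors (K, C) and the K prior masks (K, h, w)
    before they enter FEM. Illustrative sketch."""
    return np.mean(support_vecs, axis=0), np.mean(prior_masks, axis=0)
```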

(a) (b)
Fig. 7: Structures of (a) convolution block and (b) classification head.

3.4.2 Loss Function

We select the cross-entropy loss as our loss function. As shown in Section 3.3.2 and Figure 3, for an FEM with n different spatial sizes, the intermediate supervision on X_Q,i^new (i ∈ {1, ..., n}) generates n losses L_1^i. The final prediction of PFENet generates the second loss L_2. The total loss L is the weighted sum of L_1^i and L_2:

L = β · (1/n) Σ_{i=1}^{n} L_1^i + L_2,   (9)

where β is used to balance the effect of intermediate supervision. We empirically set β to 1.0 in all experiments.
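Eq. (9) combines the averaged intermediate losses with the final loss; a direct transcription:

```python
def total_loss(inter_losses, final_loss, beta=1.0):
    """Eq. (9): average the n intermediate FEM losses, weight by beta,
    and add the loss of the final prediction."""
    return beta * sum(inter_losses) / len(inter_losses) + final_loss
```

For example, with intermediate losses 1.0 and 3.0 and a final loss of 2.0, the total is 1.0 · 2.0 + 2.0 = 4.0.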

Methods 1-Shot 5-Shot Params
Fold-0 Fold-1 Fold-2 Fold-3 Mean Fold-0 Fold-1 Fold-2 Fold-3 Mean
VGG-16 Backbone
OSLSM [shaban] 33.6 55.3 40.9 33.5 40.8 35.9 58.1 42.7 39.1 44.0 276.7M
co-FCN [co-FCN] 36.7 50.6 44.9 32.4 41.1 37.5 50.0 44.1 33.9 41.4 34.2M
SG-One [SG-One] 40.2 58.4 48.4 38.4 46.3 41.9 58.6 48.6 39.4 47.1 19.0M
AMP [adaptivemaskweightimprinting] 41.9 50.2 46.7 34.7 43.4 41.8 55.5 50.3 39.9 46.9 34.7M
PANet [panet] 42.3 58.0 51.1 41.2 48.1 51.8 64.6 59.8 46.5 55.7 14.7M
FWBF [weighingboosting] 47.0 59.6 52.6 48.3 51.9 50.9 62.9 56.5 50.1 55.1 -
Ours 56.9 68.2 54.4 52.4 58.0 59.0 69.1 54.8 52.9 59.0 10.4M
ResNet-50 Backbone
CANet [canet] 52.5 65.9 51.3 51.9 55.4 55.5 67.8 51.9 53.2 57.1 19.0M
PGNet [Zhang_2019_ICCV] 56.0 66.9 50.6 50.4 56.0 54.9 67.4 51.8 53.0 56.8 17.2M
Ours 61.7 69.5 55.4 56.3 60.8 63.1 70.7 55.8 57.9 61.9 10.8M
ResNet-101 Backbone
FWBF [weighingboosting] 51.3 64.5 56.7 52.2 56.2 54.8 67.4 62.2 55.3 59.9 -
Ours 60.5 69.4 54.4 55.9 60.1 62.8 70.4 54.9 57.6 61.4 10.8M
TABLE I: Class mIoU results on four folds of PASCAL-5. Params: number of learnable parameters.
Methods 1-Shot 5-Shot Params
VGG-16 Backbone
OSLSM [shaban] 61.3 61.5 272.6M
co-FCN [co-FCN] 60.1 60.2 34.2M
PL [prototype] 61.2 62.3 -
SG-One [SG-One] 63.9 65.9 19.0M
PANet [panet] 66.5 70.7 14.7M
Ours 72.0 72.3 10.4M
ResNet-50 Backbone
CANet [canet] 66.2 69.6 19.0M
PGNet [Zhang_2019_ICCV] 69.9 70.5 17.2M
Ours 73.3 73.9 10.8M
ResNet-101 Backbone
A-MCG [AttantionMCG] 61.2 62.2 86.1M
Ours 72.9 73.5 10.8M
TABLE II: FB-IoU results on PASCAL-5. Our results are single-scale ones without additional post-processing like DenseCRF [densecrf]. As many other methods do not report the specific result of each fold, we present the comparison of the average FB-IoU results in this table.

4 Experiments

4.1 Implementation Details

Datasets   We use the datasets of PASCAL-5 [shaban] and COCO [coco] in evaluation. PASCAL-5 is composed of PASCAL VOC 2012 [pascalvoc2012] and extended annotations from SDS [SDS] datasets. 20 classes are evenly divided into 4 folds and each fold contains 5 classes. Following OSLSM [shaban], we randomly sample 1,000 query-support pairs in each test.

Following [weighingboosting], we also evaluate our model on COCO by evenly splitting the 80 classes into four folds, so each fold has 20 classes. Note that the COCO validation set contains 40,137 images (80 classes), far more than PASCAL-5. Therefore, the 1,000 randomly sampled query-support pairs used in previous work are not enough to produce reliable testing results on 20 test classes. We instead randomly sample 20,000 query-support pairs for the evaluation on each fold, making the results more stable than testing on the 1,000 query-support pairs used in previous work. Stability statistics are shown in Section 4.7.

For both PASCAL-5 and COCO, when testing the model on one fold, we use the other three folds to train the model for cross-validation. We take the average of five testing results with different random seeds for comparison as shown in Tables IX and X.

Experimental Setting

   Our framework is constructed on PyTorch. We select VGG-16 [vgg], ResNet-50 [resnet] and ResNet-101 [resnet] as our backbones for fair comparison with other methods. The ResNet we use is the dilated version used in previous work [weighingboosting, canet, AttantionMCG]; the VGG we use is the original version [vgg]. All backbone networks are initialized with ImageNet [imagenet] pre-trained weights, and other layers are initialized by the default setting of PyTorch. We use SGD as our optimizer with momentum 0.9 and weight decay 0.0001. We adopt the 'poly' policy [deeplab] to decay the learning rate by multiplying the base rate by (1 − current_iter/max_iter)^power, where power equals 0.9.

Our models are trained on PASCAL-5 for 200 epochs, as in [canet], with learning rate 0.0025 and batch size 4. For experiments on COCO, models are trained for 50 epochs with learning rate 0.005 and batch size 8. Parameters of the backbone network are not updated. During training, samples are processed with a mirror operation and random rotation from -10 to 10 degrees, and we then randomly crop patches from the processed images as training samples. During evaluation, each input sample is resized to the training patch size while keeping its original aspect ratio by zero padding, and the prediction is resized back to the original label size. We directly output single-scale results without fine-tuning or any additional post-processing (such as multi-scale testing and DenseCRF [densecrf]). Our experiments are conducted on an NVIDIA Titan V GPU and an Intel Xeon CPU E5-2620 v4 @ 2.10GHz. The code and trained models will be made publicly available.
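The 'poly' schedule mentioned above is a one-liner:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'Poly' learning-rate policy: base_lr * (1 - iter/max_iter)^power."""
    return base_lr * (1 - cur_iter / max_iter) ** power
```

With the PASCAL-5 base rate of 0.0025, the rate starts at 0.0025 and decays smoothly to 0 at the last iteration.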

Evaluation Metrics   Following [canet, weighingboosting], we adopt the class mean intersection-over-union (mIoU) as our major evaluation metric for the ablation study, since class mIoU is more reasonable than the foreground-background IoU (FB-IoU), as stated in [canet]. The formulation is mIoU = (1/C) Σ_{c=1}^{C} IoU_c, where C is the number of classes in each fold (C = 20 for COCO and C = 5 for PASCAL-5) and IoU_c is the intersection-over-union of class c. We also report FB-IoU results for comparison with other methods. For the FB-IoU calculation on each fold, only foreground and background are considered (C = 2). We take the average of the results on all folds as the final mIoU/FB-IoU.


Fig. 8: Qualitative results of the proposed PFENet and the baseline. The left samples are from COCO and the right ones are from PASCAL-5. From top to bottom: (a) support images, (b) query images, (c) ground truth of query images, (d) predictions of baseline, (e) predictions of PFENet.
Methods Backbone 1-Shot 5-Shot
Fold-0 Fold-1 Fold-2 Fold-3 Mean Fold-0 Fold-1 Fold-2 Fold-3 Mean
Class mIoU Evaluation
FWBF [weighingboosting] VGG-16 18.4 16.7 19.6 25.4 20.0 20.9 19.2 21.9 28.4 22.6
Ours VGG-16 33.4 36.0 34.1 32.8 34.1 35.9 40.7 38.1 36.1 37.7
PANet [panet] VGG-16 - - - - 20.9 - - - - 29.7
Ours VGG-16 35.4 38.1 36.8 34.7 36.3 38.2 42.5 41.8 38.9 40.4
FWBF [weighingboosting] ResNet-101 19.9 18.0 21.0 28.9 21.2 19.1 21.5 23.9 30.1 23.7
Ours ResNet-101 34.3 33.0 32.3 30.1 32.4 38.5 38.6 38.2 34.3 37.4
Ours ResNet-101 36.8 41.8 38.7 36.7 38.5 40.4 46.8 43.2 40.5 42.7
FB-IoU Evaluation
PANet [panet] VGG-16 - - - - 59.2 - - - - 63.5
Ours VGG-16 53.3 66.1 66.6 67.1 63.3 53.5 68.3 68.2 70.1 65.0
Ours VGG-16 50.0 63.1 63.5 63.4 60.0 50.3 65.2 65.2 65.5 61.6
A-MCG [AttantionMCG] ResNet-101 - - - - 52.0 - - - - 54.7
Ours ResNet-101 52.2 59.5 61.5 61.4 58.6 51.5 65.6 65.7 64.7 61.9
Ours ResNet-101 51.6 65.9 66.6 66.0 63.0 52.3 70.0 69.5 71.3 65.8
TABLE III: Class mIoU / FB-IoU results on COCO. For each comparison, we report our results under two evaluation protocols: one evaluates on labels resized to a fixed training crop size (473 for our models), and the other tests on labels with the original sizes.
Methods 1-Shot 5-Shot
Fold-0 Fold-1 Fold-2 Fold-3 Mean Fold-0 Fold-1 Fold-2 Fold-3 Mean
W/O 60.5 68.4 55.4 54.9 59.8 62.8 68.9 55.6 56.5 61.0
TD 61.7 69.5 55.4 56.3 60.8 63.1 70.7 55.8 57.9 61.9
BU 62.4 69.2 53.9 55.9 60.4 63.1 70.1 53.7 56.0 60.7
TD+BU 61.0 69.7 55.6 57.0 60.8 62.4 70.4 56.4 58.9 62.0
BU+TD 61.0 68.9 54.8 56.0 60.2 62.4 69.8 54.5 56.7 60.8
TABLE IV: Class mIoU results of different ways of inter-scale interaction on PASCAL-5^i. All models in this table are based on ResNet-50 and are trained and tested with prior masks. W/O: FEM without the information path for inter-scale interaction. TD: FEM with the top-down information path. BU: FEM with the bottom-up information path. TD+BU: FEM with the top-down + bottom-up information path. BU+TD: FEM with the bottom-up + top-down information path.
Methods 1-Shot 5-Shot
Fold-0 Fold-1 Fold-2 Fold-3 Mean Fold-0 Fold-1 Fold-2 Fold-3 Mean
{60} (Baseline) 54.3 67.3 53.3 50.4 56.3 57.1 68.0 53.8 52.9 58.0
{60} + PPM [pspnet] 55.4 68.4 53.2 51.4 57.1 58.3 68.9 53.5 50.8 57.9
{60} + ASPP [aspp] 57.6 68.4 52.8 49.0 56.9 59.5 69.3 52.6 50.7 58.0
{60, 6, 3, 2, 1} 58.8 68.0 54.1 51.2 58.0 59.8 68.4 53.8 52.1 58.5
{60, 30} 55.3 67.8 54.7 51.2 57.3 58.4 68.7 54.5 53.1 58.7
{60, 30, 15} 56.6 68.0 54.6 52.9 58.0 59.0 68.7 55.0 54.0 59.2
{60, 30, 15, 8} 59.4 68.9 54.7 53.6 59.2 61.5 69.5 55.4 55.3 60.4
{60, 30, 15, 8, 4} 58.7 68.5 54.1 54.5 58.9 60.3 69.3 54.9 56.4 60.2
{60, 30, 15, 8}-WO 57.9 67.4 53.7 53.6 58.2 60.5 68.0 54.2 53.8 59.1
TABLE V: Class mIoU of FEM with different spatial sizes and the comparison with PPM [pspnet] and ASPP [aspp] on PASCAL-5^i. The backbone is ResNet-50. '{n1, n2, ...}': the input query feature is adaptively average-pooled to these spatial sizes and concatenated with the expanded support features at each scale, as shown in Figure 4. WO: without inter-scale interaction.

4.2 Results

As shown in Tables I, II and III, we build our models on three backbones, VGG-16, ResNet-50 and ResNet-101, and report the mIoU/FB-IoU results respectively. By incorporating the proposed prior mask and FEM, our model significantly outperforms previous methods, reaching a new state-of-the-art on both the PASCAL-5^i and COCO datasets. PFENet even outperforms other methods on COCO by more than 10 points in terms of class mIoU. Our advantage over PANet in FB-IoU on COCO is smaller than that in class mIoU, because FB-IoU is biased towards the background and towards classes that cover large foreground areas. It is worth noting that PFENet achieves the best performance with the fewest learnable parameters (10.4M for the VGG-based model and 10.8M for the ResNet-based models). Qualitative results are shown in Figure 8.

4.3 Ablation Study of FEM  

The proposed feature enrichment module (FEM) adaptively enriches the query feature by merging it with support features at different scales, and utilizes an inter-scale path to vertically transfer useful information from the auxiliary features to the main features. To verify the effectiveness of FEM, we first compare different strategies for inter-scale interaction, showing that the top-down information path brings a decent performance gain to the baseline without increasing the model size much. Then, experiments with different designs for inter-source enrichment are presented, followed by comparisons with the feature enrichment designs of HRNet [hrnet_pami], ASPP [aspp] and PPM [pspnet]. We also compare with the Graph Attention Unit (GAU) used to refine the query feature in the recent state-of-the-art few-shot segmentation method PGNet [Zhang_2019_ICCV]. In these experiments, since the input images are resized to 473×473, the input feature map of the module (e.g., FEM, GAU) has spatial size 60×60.

4.3.1 Inter-Scale Interaction Strategies

In this section, we present experimental results and analysis of different vertical inter-scale interaction strategies to justify the design rationale of FEM.

As mentioned in Section 3.3, there are four alternatives for the inter-scale interaction: the top-down path (TD), the bottom-up path (BU), the top-down + bottom-up path (TD+BU), and the bottom-up + top-down path (BU+TD). Our experimental results in Table IV show that TD and TD+BU improve on the basic FEM structure without the information path (W/O) and accomplish better results than both BU and BU+TD. The model with TD+BU contains more learnable parameters (16.0M) than TD (10.8M), yet yields comparable performance. We thus choose TD for inter-scale interaction.

These experiments show that using the finer feature (auxiliary) to provide additional information to the coarse feature (main) is more effective than using the coarse feature (auxiliary) to refine the finer feature (main). This is because, if the target object vanishes at small scales, the coarse features are insufficient for locating the query classes during the later information concentration stage.

Different from common semantic segmentation, where contextual information is the key to good performance, the representation and acquisition of query information matter more in few-shot segmentation. Our motivation for designing FEM is to match the query and support features at different scales to tackle the spatial inconsistency between the query and support samples. Thus, a down-sampled coarse query feature without target information is less helpful for improving the quality of the final prediction, as shown in the experiments comparing TD and BU.

4.3.2 Comparison with Other Designs

PPM [pspnet] and ASPP [aspp] are two popular feature enrichment modules for semantic segmentation that provide multi-resolution context, and HRNet [hrnet_pami, hrnet, hrnet_arxiv] provides another feature enrichment design that achieved state-of-the-art results on semantic segmentation benchmarks. In few-shot segmentation, the Graph Attention Unit (GAU) has been used in PGNet [Zhang_2019_ICCV] to refine the query feature with contextual information. We show that the proposed FEM yields even better few-shot segmentation performance.

The improvement brought by FEM stems from two designs: (1) the fusion of query and support features at different spatial sizes (inter-source enrichment), which encourages the following convolution blocks to process the concatenated features independently at different spatial resolutions and is beneficial for predicting query targets of various scales; and (2) the inter-scale interaction that selectively passes useful information from the auxiliary feature to supplement the main feature. The model without the vertical top-down information path (marked with WO) yields worse results in Table V.
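A toy NumPy sketch of these two mechanisms follows. The small spatial sizes (8, 4, 2) and random tensors are stand-ins for illustration only; the real FEM uses the scales {60, 30, 15, 8} and learned convolution blocks, and its inter-scale merging module performs learned conditioned selection rather than the pooled residual addition used here:

```python
import numpy as np

def adaptive_avg_pool(x, size):
    # x: (C, H, W) -> (C, size, size), mimicking adaptive average pooling
    C, H, W = x.shape
    out = np.zeros((C, size, size))
    for i in range(size):
        h0, h1 = (i * H) // size, -((-(i + 1) * H) // size)  # floor / ceil bounds
        for j in range(size):
            w0, w1 = (j * W) // size, -((-(j + 1) * W) // size)
            out[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return out

rng = np.random.default_rng(0)
C, scales = 16, [8, 4, 2]                      # toy sizes; the paper uses {60, 30, 15, 8}
query = rng.standard_normal((C, 8, 8))
support_vec = rng.standard_normal((C, 1, 1))   # mask-pooled support feature
prior = rng.random((1, 8, 8))                  # prior mask of the query image

# Inter-source enrichment: merge query, expanded support, and prior at every scale.
merged = []
for s in scales:
    q_s = adaptive_avg_pool(query, s)
    p_s = adaptive_avg_pool(prior, s)
    sup_s = np.broadcast_to(support_vec, (C, s, s))
    merged.append(np.concatenate([q_s, sup_s, p_s], axis=0))   # (2C+1, s, s)

# Inter-scale interaction (top-down path): each coarser main feature receives
# information from the next finer auxiliary feature.
for k in range(1, len(scales)):
    merged[k] = merged[k] + adaptive_avg_pool(merged[k - 1], scales[k])
```

The per-scale concatenated features keep their own resolution, so later blocks can process each scale independently before the final merge.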

We also implement ASPP, and it achieves results close to PPM; dilated convolution is less effective than adaptive average pooling for few-shot segmentation [Zhang_2019_ICCV]. In the following, we first compare with PPM and GAU, since both use adaptive pooling to provide multi-scale information, and then discuss the module proposed in HRNet.

Pyramid Pooling Module (PPM)   As shown in Table V, the model with spatial sizes {60, 30, 15, 8} achieves better performance than the baseline (original spatial size 60) and than the models that replace FEM with PPM or ASPP. Experiments in PSPNet [pspnet] show that the Pyramid Pooling Module (PPM) with spatial sizes {1, 2, 3, 6} yields the best performance. When such small spatial sizes are applied to FEM ({60, 6, 3, 2, 1}), it still outperforms PPM, but small spatial sizes are not optimal in FEM because features pooled to sizes like 1, 2 and 3 are too coarse for the interaction and fusion of the query and support features. Similarly, with the small spatial size 4, the FEM with {60, 30, 15, 8, 4} yields inferior performance compared to the model with spatial sizes {60, 30, 15, 8}. Hence, we select {60, 30, 15, 8} as the feature scales for the inter-source enrichment of FEM.

Graph Attention Unit (GAU)   GAU [Zhang_2019_ICCV] uses the graph attention mechanism to establish the element-to-element correspondence between the query and support features at each scale. Pixels of the support feature are weighted by the GAU, and the new support feature is the weighted sum of the original support feature. The new support feature is then concatenated with the query feature for further processing.

We directly replace FEM with GAU on our baseline and keep the other settings for a fair comparison. GAU is implemented with the code provided by the authors. Our baseline with GAU achieves class mIoU of 55.4 and 56.1 in 1- and 5-shot evaluation respectively. Since the original feature scales in GAU differ from ours, we also implement it with the scales {60, 30, 15, 8} used in our FEM (denoted as GAU+). GAU+ yields lower mIoU than GAU (54.9 in 1-shot and 55.4 in 5-shot). Though GAU also forms a pyramid structure via adaptive pooling to capture multi-level semantic information, it is less competitive than the proposed FEM (59.2 in 1-shot and 60.4 in 5-shot) because it lacks the hierarchical inter-scale relationship that adaptively provides information extracted from other levels to help refine the merged feature.

High-Resolution Network (HRNet)   HRNet has shown its superiority on many vision tasks by maintaining a high-resolution feature throughout the network and gradually fusing multi-scale features to enrich the high-resolution feature. The proposed FEM can be deemed a variant of the modularized block of HRNet (HRB) that tackles the few-shot segmentation problem. The inter-source enrichment of FEM is analogous to the multi-resolution parallel convolution in HRB, as shown in Figure 9, but the inter-scale interaction in FEM passes conditioned information from large to small scales rather than performing dense, unselective interaction among all scales as in HRB.

For comparison, we replace the FEM in PFENet with HRB and generate feature maps in HRB with the same scales as those in FEM ({60, 30, 15, 8}). Results are listed in Table VI. Directly applying HRB to the baseline (Baseline + HRB) does yield better results than PPM and ASPP. However, densely passing information without selection causes redundancy in the target feature and yields suboptimal results. Our solution is to apply, in the multi-resolution fusion stage of HRB, the proposed inter-scale merging module to extract essential information from the auxiliary features, as shown in Figure 10. The model with conditioned feature selection (HRB-Cond) achieves better performance.

As shown in Table IV, passing features from coarse to fine levels (in a bottom-up order) adversely affects inter-scale interaction. We accordingly remove all bottom-up paths in HRB and only allow top-down ones (denoted as HRB-TD). It is not surprising that HRB-TD achieves better performance than HRB, and adding conditioned feature selection (HRB-TD-Cond) brings even further improvement.

The best variant of HRB (i.e., HRB-TD-Cond) yields results comparable with FEM, yet it brings 7.5M more learnable parameters. Therefore, for few-shot segmentation, the conditioned feature selection mechanism of the proposed inter-scale merging module is essential for improving the performance of multi-resolution structures.

Fig. 9: Modularized block of HRNet (HRB) that applies dense multi-resolution fusions.
Fig. 10: Comparison between the feature fusion strategies of (left) HRB and (right) HRB-Cond. Features from different scales are directly added to the main feature in (left), while in (right), essential information is selected from the auxiliary features conditioned on the main feature by the inter-scale merging module.
Methods 1-Shot 5-Shot Params Speed
Baseline 56.3 58.0 4.5 M 17.7 FPS
Baseline + PPM [pspnet] 57.1 57.9 5.7 M 17.6 FPS
Baseline + ASPP [aspp] 56.9 58.0 7.9 M 17.5 FPS
Baseline + HRB [hrnet_pami] 58.3 59.4 14.4 M 15.7 FPS
Baseline + HRB-Cond 59.2 60.0 23.0 M 14.5 FPS
Baseline + HRB-TD 58.9 60.0 14.0 M 16.1 FPS
Baseline + HRB-TD-Cond 59.3 60.4 18.3 M 15.6 FPS
Baseline + FEM 59.2 60.4 10.8 M 17.3 FPS
Baseline + FEM* 58.9 60.2 12.9 M 16.1 FPS
Baseline + Prior 58.2 59.6 4.5 M 16.5 FPS
Baseline + FEM + Prior 60.8 61.9 10.8 M 15.9 FPS
Baseline† 48.8 50.1 28.2 M 17.7 FPS
Baseline† + FEM 50.2 52.3 34.5 M 16.1 FPS
Baseline† + Prior† 49.7 53.1 28.2 M 16.5 FPS
Baseline† + FEM + Prior† 51.9 55.3 34.5 M 15.9 FPS
TABLE VI: Class mIoU on PASCAL-5^i and efficiency of models with/without the proposed prior and FEM. Models are based on ResNet-50. Params: the number of learnable parameters. Speed: average frames-per-second (FPS) in 1-shot evaluation. HRB: modularized block of HRNet [hrnet_pami]. -TD: only top-down feature enrichment paths are enabled. -Cond: the inter-scale enrichment modules pass conditioned information. FEM: feature enrichment module with spatial sizes {60, 30, 15, 8}. FEM*: FEM with spatial sizes {60, 30, 15, 8, 4}. Prior: prior masks obtained from fixed high-level features (conv5_x). Baseline†: models trained with all backbone parameters. Prior†: prior masks obtained from learnable high-level features.
(a) Input (b) GT (c) L-M (d) L-H (e) F-M (f) F-H
Fig. 11: Visual comparison between priors generated by different sources. Prior values are normalized to 0-1, which implies the probability of being the target region. GT: Ground truth. L-M: Learnable middle-level features. L-H: Learnable high-level features. F-M: Fixed middle-level features. F-H: Fixed high-level features.
(a) L-M (b) L-H (c) F-M (d) F-H
Fig. 12: Visual comparison between t-SNE results of different feature sources. 1,000 features in gray color are from base classes and 1,000 features in other colors are from novel classes. L-M: Learnable middle-level features. L-H: Learnable high-level features. F-M: Fixed middle-level features. F-H: Fixed high-level features.
Methods 1-Shot 5-Shot
Fold-0 Fold-1 Fold-2 Fold-3 Mean Fold-0 Fold-1 Fold-2 Fold-3 Mean
Baseline 49.4 64.6 53.3 46.0 53.3 51.5 65.5 52.5 47.0 54.1
Baseline + Prior_LM 50.3 64.5 53.0 46.2 53.5 51.9 65.7 52.9 47.2 54.4
Baseline + Prior_LH 37.8 60.8 53.5 43.4 48.9 42.5 64.2 57.8 47.6 53.0
Baseline + Prior_FM 51.2 64.4 53.9 45.7 53.8 52.8 65.1 53.2 47.5 54.7
Baseline + Prior_FH 53.5 65.6 53.6 48.8 55.4 55.7 66.4 53.8 49.8 56.4
Baseline + Prior-A 52.2 65.4 54.5 48.5 55.1 54.8 66.0 54.3 50.2 56.3
Baseline + Prior-P 52.4 65.8 53.1 47.6 54.7 54.9 67.0 53.5 48.8 56.1
Baseline + Prior-FW_LM 50.6 64.9 52.4 42.9 52.7 53.4 65.5 51.7 43.2 53.5
Baseline + Prior-FW_LH 37.5 60.3 54.8 43.9 49.1 44.2 62.8 58.5 47.0 53.1
Baseline + Prior-FW_FM 50.6 64.7 54.4 47.0 54.2 52.5 65.4 53.7 47.8 54.9
Baseline + Prior-FW_FH 51.0 65.1 53.9 48.8 54.7 52.7 66.1 53.8 50.4 55.8
TABLE VII: Class mIoU results of different prior masks on PASCAL-5^i. All models in this table are based on VGG-16. LM: learnable middle-level features. LH: learnable high-level features. FM: fixed middle-level features. FH: fixed high-level features. Prior: prior mask obtained by taking the maximum similarity value. Prior-A: prior mask obtained from the average similarity value. Prior-P: prior mask generated with the mask-pooled support feature. Prior-FW: prior mask obtained with the feature weighting mechanism proposed in [weighingboosting]. Subscripts denote the feature source used for prior generation.

4.4 Ablation Study of the Prior Generation  

Experimental results in Table VI show that the prior improves models both with and without FEM. Cosine similarity is widely used in few-shot segmentation: PANet [panet] uses it to yield the intermediate and final prediction masks, while SG-One [SG-One] and [weighingboosting] both utilize the cosine-similarity mask computed from the mask-pooled support feature to provide additional guidance. However, these methods overlook two factors. First, the mask generation process contains trainable components, so the generated mask is biased towards the base classes during training.

Second, discriminative information is lost during the masked average pooling of support features, since the most relevant information in the support feature may be overwhelmed by irrelevant information during the pooling operation. For example, the discriminative regions for “cat & dog” are mainly around their heads, while their main bodies share similar characteristics (e.g., tailed quadrupeds), so the representation produced by masked global average pooling loses the discriminative information contained in the support samples.

In the following, we first explain the rationale behind our prior generation: using the fixed high-level feature and taking the maximum pixel-wise correspondence value from the similarity matrix. Then, we compare with other methods to demonstrate the superiority of our strategy. We also analyze the generalization ability on unseen objects outside the ImageNet [imagenet] dataset to further manifest the robustness of our method.

4.4.1 Feature Selection  

In our design, we select the fixed high-level feature for prior generation because it provides sufficient semantic information for accurate segmentation without sacrificing generalization ability. The proposed prior generation is independent of the training process, so it does not lead to a loss of generalization power. The prior masks provide bias-free prior information from high-level features for both seen and unseen data during evaluation, while masks produced by learnable feature maps (e.g., [panet, SG-One, weighingboosting]) are affected by parameter learning during training. As a result, a preference for the training classes is inevitable in these latter masks during inference. To show the superiority of our choice, we conduct experiments on different sources of features for generating prior masks.

Quantitative Analysis   Table VII shows that the mask generated from either learnable or fixed middle-level features (Prior_LM or Prior_FM) brings less improvement than our Prior_FH, since the middle-level feature is less effective at revealing the semantic correspondence between the query and support features. Moreover, the results of the mask obtained from the learnable high-level feature (Prior_LH) are even significantly worse than those of our baseline, because the learnable high-level feature severely overfits to the base classes: the model relies on the accurate prior masks produced by the learnable high-level feature for locating the target regions of base classes during training, and therefore it hardly generalizes to previously unseen classes during inference.

Qualitative Analysis   Generated prior masks are shown in Figure 11. Masks of unseen classes generated by learnable high-level feature maps (L-H) cannot clearly reveal the potential region of interest, while using the fixed high-level feature maps (F-H) keeps the general integrity of the target region. Compared to high-level features, prior masks produced by middle-level ones (L-M and F-M) are more biased towards the background region.

To help explain the quantitative results and those in Figure 11, embedding visualizations are shown in Figure 12, where 1,000 samples of base classes (gray) and 1,000 samples of novel classes (colored in green, red, purple, blue and orange) are processed by the backbone followed by t-SNE [tsne]. Based on the overlapping area between the clusters of base and novel classes, we draw two conclusions. First, the middle-level features in Figure 12 (a) & (c) are less discriminative than the high-level features in Figure 12 (b) & (d). Second, learnable features lose discrimination ability, as shown in (a) & (b), because the embeddings of novel classes are biased towards those of the base classes, which is detrimental to generalization on unseen classes.

4.4.2 Discrimination Ability  

In our model, the prior mask acts as a pixel-wise indicator for each query image. As given in Eq. (3), taking the maximum correspondence value from the pixel-wise similarity between the query and support features means that, for a query pixel with a high prior value, there exists at least one pixel/area in the support image with a close semantic relation to it. This is beneficial for revealing most of the potential targets in query images. Other alternatives include using the mask-pooled support feature to generate the similarity mask as in [panet, SG-One, weighingboosting], and taking the average rather than the maximum value of the pixel-wise similarity.
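The max-over-support-pixels rule described here can be sketched in NumPy as follows. This is a simplified stand-in for Eq. (3) with toy random features; the real model computes it on fixed high-level backbone features, and the final min-max normalization to [0, 1] matches the normalized prior values shown in Figure 11:

```python
import numpy as np

def prior_mask(query_feat, support_feat, support_mask, eps=1e-8):
    """Training-free prior: for each query pixel, the maximum cosine similarity
    to any foreground support pixel, min-max normalized to [0, 1].

    query_feat, support_feat: (C, H, W); support_mask: (H, W) binary foreground mask.
    """
    C, h, w = query_feat.shape
    q = query_feat.reshape(C, -1).T                     # (hw, C) query pixels
    s = (support_feat * support_mask).reshape(C, -1).T  # background features zeroed
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + eps)
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + eps)
    sim = q @ s.T                                       # (hw, hw) cosine similarities
    prior = sim.max(axis=1)                             # best-matching support pixel
    prior = (prior - prior.min()) / (prior.max() - prior.min() + eps)
    return prior.reshape(h, w)
```

Because no operation here involves trainable parameters, the resulting mask carries no bias towards the base classes.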

To verify the effectiveness of our design, we train two additional models in Table VII: one with prior masks generated by averaging similarities (Prior-A), and another whose prior masks are obtained from the mask-pooled support feature (Prior-P). Both perform worse than the proposed strategy (Prior).

We note the following fact: our prior generation method takes the maximum value from a similarity matrix of size hw × hw to generate the prior mask of size h × w (Eq. (3)), whereas Prior-P forms the mask from a similarity matrix of size hw × 1. The speed difference is nevertheless small, because the computational complexities of both mask generation methods are much lower than that of the rest of the network. The FPS values of Prior, Prior-A, Prior-P and Prior-FW based on the VGG-16 baseline are all around 23.1, because the output features only contain 512 channels. The FPS values of Prior, Prior-A, Prior-P and Prior-FW based on the ResNet-50 baseline, whose output features have 2048 channels, are 16.5, 16.5, 17.4 and 17.0 respectively.

4.4.3 Comparison with Other Designs  

Some other methods also use the similarity mask as intermediate guidance for improving performance (e.g., [panet, SG-One, weighingboosting]). Their masks are obtained from the learnable mask-pooled support feature and the learnable query feature, and are then used for further processing and making the final prediction. The strategy of this type of method is similar to Prior-P.

In [weighingboosting], good feature discrimination ability makes activations high on the foreground and low elsewhere. We follow Eqs. (3)-(6) in [weighingboosting] to implement the feature weighting mechanism on both the query and support features used for prior mask generation. In [weighingboosting], the weighting mechanism is directly applied to learnable features; in our model, we additionally apply it to the fixed middle- and high-level features (Prior-FW_FM and Prior-FW_FH). However, it does not perform better than our Prior_FH. The results of Prior-FW_LH, which are much worse than those of Prior-FW_FH, demonstrate the effectiveness of our feature selection strategy (with fixed high-level features) for prior generation, and our feature selection strategy is complementary to the weighting mechanism of [weighingboosting].

4.4.4 Generalization on Totally Unseen Objects  

Many object classes of PASCAL-5^i and COCO are included in ImageNet [imagenet], which is used for backbone pre-training, so even for classes unseen during few-shot training, the backbone still provides strong semantic cues to help identify the target area in query images with the information provided by the support images. The class ‘Person’ in PASCAL-5^i is not contained in ImageNet, and the baseline with the prior mask achieves 15.81 IoU on it, better than that without the prior mask (14.38). However, the class ‘Person’ is not rare in ImageNet samples, even if their labels are not ‘Person’.

To further demonstrate the generalization ability of our method on totally unseen objects, we conduct experiments on the recently proposed FSS-1000 [FSS-1000] dataset, where the foreground IoU is used as the evaluation metric. FSS-1000 is composed of 1,000 classes, among which 486 are not included in any other existing dataset [FSS-1000]. (In practice, 420 unseen classes are filtered out; the authors of FSS-1000 clarified by email that they ”have made incremental changes to the dataset to improve class balance and label quality so the number may have changed. Please do experiments according to the current version.”) We train our models with the ResNet-50 backbone on the seen classes for 100 epochs with batch size 16 and initial learning rate 0.01, and then test them on the unseen classes. The number of query-support pairs sampled for testing equals five times the number of unseen samples.

As shown in Table VIII, the baseline with the prior mask achieves 80.8 and 81.4 foreground IoU in 1- and 5-shot evaluation respectively, outperforming the vanilla baseline (79.7 and 80.1) by more than 1.0 foreground IoU in both settings. A visual illustration is given in Figure 13, where the target regions are still highlighted in the prior masks even though these objects were never seen by the ImageNet pre-trained backbone.

Fig. 13: Visual illustrations of prior masks for totally unseen objects in FSS-1000 dataset. Top: support images with the masked area in the target class. Middle: query images. Bottom: prior masks of query images where the regions of interest are highlighted.
Methods 1-Shot 5-Shot
Baseline 79.7 80.1
Baseline + Prior 80.8 81.4
TABLE VIII: Foreground IoU results on totally unseen classes of FSS-1000 [FSS-1000].

4.5 Backbone Training  

In OSLSM [shaban], two backbone networks are trained to achieve few-shot segmentation, while backbone parameters in recent work [canet, panet] are kept fixed to prevent overfitting. However, no experiments have shown what effect backbone training has. To better understand how the backbone affects our method, the results of four models trained with all backbone parameters are shown in the last four rows of Table VI.

The additional trainable backbone parameters cause a significant performance reduction due to overfitting to the training classes. Moreover, backbone training nearly doubles the training time of each batch because an additional parameter update is required; it does not, however, affect the inference speed. As shown in the results, the improvement that FEM and the prior mask bring to models with trainable backbones is less significant than on those with fixed backbones. We note that the prior masks in this section are produced by learnable high-level features because the whole backbone is trainable. The learnable high-level features hurt performance when the backbone is fixed, as shown in Table VII, but they are beneficial to the trainable backbone. In 5-shot evaluation, the prior yields a higher performance gain than FEM, because the prior is averaged over five support samples and thus provides a more accurate prior mask for query images than in the 1-shot setting, combating overfitting. Finally, the model with both FEM and the prior still outperforms the baseline model, which demonstrates the robustness of our proposed designs even with all parameters learnable.

4.6 Model Efficiency

Parameters   The parameters of our backbone network are fixed, as in [panet, canet, Zhang_2019_ICCV]. Four parts of the baseline model are learnable: two 1×1 convolutions for reducing the dimension of the query and support features, the FEM, one convolution block and one classification head. As shown in Table VI, our best model (Baseline + FEM + Prior) only has 10.8M trainable parameters, much fewer than the other methods shown in Table I. The prior generation does not add parameters to the model, and FEM with spatial sizes {60, 30, 15, 8} only brings 6.3M additional learnable parameters to the baseline (4.5M → 10.8M). To show that the improvement brought by FEM is not simply due to more learnable parameters, we also report the FEM variant with an extra scale, which has more parameters (12.9M) but yields even worse results than FEM (10.8M).

Speed   PFENet based on ResNet-50 yields the best performance with 15.9 and 5.1 FPS in the 1- and 5-shot settings respectively on an NVIDIA Titan V GPU. During evaluation, test images are resized to 473×473. As shown in Table VI, FEM does not affect the inference speed much (from 17.7 to 17.3 FPS). Though the proposed prior generation slows the baseline from 17.7 to 16.5 FPS, the final model is still efficient at over 15 FPS. Note that we include the processing time of the last block of ResNet in these experiments for a fair comparison.
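FPS numbers of this kind come from a simple wall-clock loop; a minimal sketch follows. The `run_inference` callable is a hypothetical stand-in for a single-image forward pass, and real GPU measurements should additionally synchronize the device before reading the clock:

```python
import time

def measure_fps(run_inference, n_warmup=10, n_iters=100):
    """Average frames-per-second of a single-image inference call."""
    for _ in range(n_warmup):          # warm-up iterations exclude one-time setup costs
        run_inference()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_inference()
    return n_iters / (time.perf_counter() - start)
```

For example, `measure_fps(lambda: model(image))` would report the steady-state throughput of a hypothetical `model` on a fixed `image`.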

Fold - Shot 1 2 3 4 5 Mean Std.
F0 - S1 61.1 61.9 62.2 61.6 61.7 61.7 0.406
F0 - S5 63.1 63.2 63.3 63.1 63.3 63.1 0.148
F1 - S1 69.5 69.7 69.1 69.5 69.7 69.5 0.245
F1 - S5 70.7 70.8 70.9 70.6 70.5 70.7 0.158
F2 - S1 55.3 55.2 55.6 55.4 55.1 55.4 0.230
F2 - S5 55.2 56.3 55.5 55.9 56.0 55.8 0.432
F3 - S1 56.0 56.2 56.2 56.7 56.3 56.3 0.259
F3 - S5 57.9 58.1 57.9 58.0 57.6 57.9 0.187
TABLE IX: Mean and std. of five test results (class mIoU) on PASCAL-5^i. ‘Fm - Sn’ denotes the n-shot results of Fold-m. Each row shows five test results together with their mean and standard deviation (Std.).

Folds Pairs 1-Shot 5-Shot
Test-1 Test-2 Test-3 Test-4 Test-5 Mean Std Test-1 Test-2 Test-3 Test-4 Test-5 Mean Std
Fold-0 1,000 35.0 36.8 35.4 37.8 34.6 35.9 1.339 38.6 35.5 37.8 38.4 38.9 37.8 1.369
Fold-0 20,000 35.5 35.6 35.5 35.3 35.2 35.4 0.164 38.2 37.7 38.4 38.5 38.2 38.2 0.308
Fold-1 1,000 36.4 36.9 38.9 34.7 36.1 36.6 1.523 41.8 41.8 39.2 42.8 40.1 41.4 1.455
Fold-1 20,000 38.3 37.8 38.2 38.2 38.2 38.1 0.195 42.2 42.6 42.3 42.6 42.8 42.5 0.245
Fold-2 1,000 36.9 35.1 37.0 34.3 32.0 35.1 2.067 39.8 40.8 41.7 40.4 38.3 40.2 1.267
Fold-2 20,000 37.0 36.4 36.9 36.4 37.2 36.8 0.363 42.1 41.5 41.9 41.6 41.8 41.8 0.239
Fold-3 1,000 34.9 35.1 36.5 34.7 35.9 35.4 0.756 38.2 38.4 39.5 36.9 38.7 38.3 0.945
Fold-3 20,000 34.6 34.8 34.5 34.6 34.9 34.7 0.164 38.8 38.8 39.2 38.9 38.7 38.9 0.192
TABLE X: Mean and std. of five test results (class mIoU) on COCO with different numbers of test query-support pairs (1,000 and 20,000). The model is based on VGG-16 [vgg]. 20,000 query-support pairs yield more stable results with a lower standard deviation than 1,000 query-support pairs.

4.7 Analysis on Result Stability

As mentioned in the implementation details, evaluating only 1,000 query-support pairs on PASCAL-5^i and COCO may cause instability in the results. In this section, we analyze result stability by conducting multiple experiments with different support samples.

PASCAL-5^i   Results in Table IX show that the standard deviations are lower than 0.5 in both the 1-shot and 5-shot settings, which demonstrates the stability of our results on PASCAL-5^i with 1,000 pairs for evaluation.

COCO   However, 1,000 pairs are not sufficient to provide reliable results for comparison, as shown in Table X, since the COCO validation set contains 40,137 images and 1,000 pairs cannot adequately cover the 20 test classes of each fold. Based on this observation, we instead randomly sample 20,000 query-support pairs to evaluate our models on the four folds, and the results in Table X show that 20,000 pairs yield much more stable results than 1,000 pairs.
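The effect of the number of test pairs on stability follows from the roughly 1/√n decay of the standard error of a mean. A small simulation with a hypothetical per-pair IoU distribution (the Beta distribution and population size are assumptions for illustration, not measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
per_pair_iou = rng.beta(2.0, 3.0, size=40_000)   # hypothetical per-pair IoU scores

def std_of_mean_estimate(n_pairs, n_runs=50):
    """Std. of the mean IoU across repeated evaluations, each using n_pairs samples."""
    means = [rng.choice(per_pair_iou, n_pairs, replace=False).mean()
             for _ in range(n_runs)]
    return float(np.std(means))

# Sampling 20,000 pairs per run gives a markedly smaller spread between
# repeated test runs than sampling 1,000 pairs per run.
```

This mirrors the observation in Table X: the estimate from 20,000 pairs varies far less across repeated evaluations than the one from 1,000 pairs.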

4.8 Extension to Zero-Shot Segmentation

Zero-shot learning aims at learning a model that remains robust when no labeled data is given; it is an extreme case of few-shot learning. To further demonstrate the robustness of PFENet in this extreme case, we modify our model by replacing the pooled support features with class label embeddings. Note that our prior generation method requires support features, so the prior is not applicable here, and we only verify FEM on the baseline with the VGG-16 backbone in the zero-shot setting.

Methods Shot Fold-0 Fold-1 Fold-2 Fold-3 Mean
OSLSM [shaban] 5 35.9 58.1 42.7 39.1 44.0
co-FCN [co-FCN] 5 37.5 50.0 44.1 33.9 41.4
SG-One [SG-One] 5 41.9 58.6 48.6 39.4 47.1
AMP [adaptivemaskweightimprinting] 5 41.8 55.5 50.3 39.9 46.9
Kato et al. [zeroshot_Kato_2019_ICCV_Workshops] 0 39.6 52.6 41.0 35.6 42.2
Baseline 0 49.4 67.1 50.3 46.0 53.2
Baseline + FEM 0 50.0 68.5 51.7 46.6 54.2
TABLE XI: Experimental results in the zero-shot setting. Models shown in this table are based on VGG-16.

Structural Change   Embeddings of Word2Vec [w2c] and FastText [fasttext] are trained on Google News [googlenews] and Common Crawl [crawl] respectively. The concatenation of the Word2Vec and FastText embeddings directly replaces the pooled support feature in the original model without normalization. Therefore, the only structural change is to the first learnable 1×1 convolution for reducing the support feature channels: its input channel number changes from 768 (512 + 256) in the original few-shot model (VGG-16 backbone) to 600 (300 + 300) in the zero-shot model.
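The channel bookkeeping above can be sketched as follows, using NumPy as a stand-in for the 1×1 convolution, which acts as the same linear map at every spatial position. The 300-dimensional embeddings match the text; the 256 output channels and the random weights are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
w2v = rng.standard_normal(300)                     # Word2Vec class embedding (assumed dim)
fasttext = rng.standard_normal(300)                # FastText class embedding (assumed dim)
class_embed = np.concatenate([w2v, fasttext])      # replaces the pooled support feature

reduce_dim = 256                                    # assumed output channels of the 1x1 conv
W_fewshot = rng.standard_normal((reduce_dim, 768))  # few-shot input: 512 + 256 channels
W_zeroshot = rng.standard_normal((reduce_dim, 600)) # zero-shot input: 300 + 300 channels

# A 1x1 conv on a 1x1 "support feature map" is just a matrix-vector product;
# the resulting vector is later expanded spatially before entering FEM.
support_vec = W_zeroshot @ class_embed
```

Only the input dimension of this first projection changes between the few-shot and zero-shot models; everything downstream is untouched.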

Results   As shown in Table XI, our base structure achieves 53.2 class mIoU on unseen classes without any support samples, which even outperforms some models evaluated with five support samples on PASCAL-5^i in the few-shot setting of OSLSM [shaban]. Moreover, the proposed FEM tackles the spatial inconsistency in the zero-shot setting as well, bringing a 1.0-point mIoU improvement (from 53.2 to 54.2) over the baseline.

5 Conclusion

We have presented the Prior Guided Feature Enrichment Network (PFENet) with the proposed prior generation method and the feature enrichment module (FEM). The prior generation method boosts performance by leveraging cosine similarity on pre-trained high-level features: the resulting prior mask helps the model localize the query target better without losing generalization power. FEM alleviates the spatial inconsistency by adaptively merging the query and support features at multiple scales with intermediate supervision and conditioned feature selection. With these modules, PFENet achieves new state-of-the-art results on both the PASCAL-5^i and COCO datasets without a significant increase in model size or a notable loss of efficiency. Experiments in the zero-shot scenario further demonstrate the robustness of our designs. Possible future work includes extending these two designs to few-shot object detection and few-shot instance segmentation.