
Contrastive Enhancement Using Latent Prototype for Few-Shot Segmentation

03/08/2022
by Xiaoyu Zhao, et al.

Few-shot segmentation enables a model to recognize unseen classes from only a few annotated examples. Most existing methods adopt a prototype learning architecture, where support prototype vectors are expanded and concatenated with query features to perform conditional segmentation. However, such a framework tends to focus on query features while neglecting the similarity between support and query features. This paper proposes a contrastive enhancement approach using latent prototypes to leverage latent classes and raise the utilization of similarity information between prototype and query features. Specifically, a latent prototype sampling module is proposed to generate pseudo-masks and novel prototypes based on feature similarity. The module is convenient for end-to-end learning and, unlike cluster-based methods, has no strong dependence on the number of clusters. Besides, a contrastive enhancement module is developed to drive models to provide different predictions with the same query features. Our method can be used as an auxiliary module that flexibly integrates into other baselines for better segmentation performance. Extensive experiments show our approach remarkably improves the performance of state-of-the-art methods for 1-shot and 5-shot segmentation, especially outperforming the baseline by 5.9% on Pascal-5^i and COCO-20^i. Source code is available at https://github.com/zhaoxiaoyu1995/CELP-Pytorch


1 Introduction

Deep learning-based segmentation methods have achieved state-of-the-art performance in various image segmentation tasks, benefiting from large pixel-level annotated datasets and advances in deep neural networks [10, 11]. However, the labeled samples available for segmentation tasks are usually limited, since pixel-level annotation is expensive and time-consuming. The performance of fully-supervised approaches drops dramatically for tasks without sufficient annotated data. Though semi-supervised methods succeed in leveraging unlabeled samples to complement annotated data, segmentation with only a few or even one annotated sample remains extremely difficult. In particular, these methods usually cannot be generalized to unseen classes.

Few-shot segmentation [37, 43, 40, 28] aims to segment novel objects in a query image using a few annotated support images. Existing methods mostly leverage representations from annotated support images and the similarity between these representations and query features for dense prediction. The prototype learning architecture is a milestone of this line: single or multiple prototype vectors are extracted from support images to represent object information. For example, masked Global Average Pooling (GAP) [45] is commonly performed to extract prototypes from support features. Some methods adopt the EM algorithm or clustering to generate multiple prototypes that represent more support information. The similarity between the prototype vectors and query features is then computed using cosine distance [43] or dense comparison [40] for segmentation.
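The two basic operations referenced throughout the paper, masked GAP and cosine-similarity matching, can be sketched as follows in PyTorch (tensor shapes and function names are illustrative, not taken from the released code):

```python
import torch
import torch.nn.functional as F

def masked_gap(feat, mask):
    """Masked Global Average Pooling: average features over the annotated
    foreground to obtain a single prototype vector.
    feat: [B, C, H, W] support features, mask: [B, 1, H', W'] binary mask."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-5)   # [B, C]

def cosine_prediction(query_feat, proto):
    """Cosine similarity between a prototype and every query location, as used
    by similarity-based methods such as SG-One [43]."""
    q = F.normalize(query_feat, dim=1)                  # [B, C, H, W]
    p = F.normalize(proto, dim=1)[..., None, None]      # [B, C, 1, 1]
    return (q * p).sum(dim=1)                           # [B, H, W] similarity map
```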

Figure 1: The motivation of our method. In the prototype learning architecture with conditional segmentation, the decoder may neglect the similarity between prototype and query features and overly concentrate on query features for highly accurate prediction on known categories, which can cause segmentation of incorrect objects. Our approach aims to mine latent prototypes and enhance the model to make different predictions with various prototypes.

As a learnable pattern, conditional segmentation by dense comparison is generally superior to cosine distance in prediction accuracy. However, we discover that conditional segmentation sometimes neglects the similarity between prototype and query features, causing segmentation of incorrect objects. As shown in Fig. 1, the support and query images contain the objects “Person” and “Bicycle”, and “Person” is unseen and novel to the model. Though the prototype representing “Person” is extracted by GAP for the query, the decoder mistakenly segments the “Bicycle” object, even though there is high similarity between the prototype vector and query features in the “Person” regions. We suppose that the decoder overly concentrates on query features for highly accurate prediction on known categories and neglects the similarity between prototype and query features, especially when the unseen category differs substantially from existing categories. Some prior works have tried to address this problem. PFENet [28] proposed a training-free prior mask generation method to introduce high-level semantic information. CyCTR [41] designed a Transformer architecture with a cycle-consistent mechanism to aggregate pixel-wise support features. HSNet [25] leverages multi-level feature correlation between support and query features instead of the prototype learning architecture. However, these methods do not explicitly utilize background features or constrain conditional segmentation to avoid segmenting inconsistent objects.

In this paper, we propose a contrastive approach with latent prototypes to enhance the decoder in conditional segmentation. Specifically, we design a Latent Prototype Sampling (LPS) module to generate prototypes from the background of the query image. Our method is based on the hypothesis that regions with high similarity in high-level features belong to the same category. Though such regions may not cover the whole object, the decoder should segment parts of the object using the prototype generated from the sampled regions and suppress the known categories (e.g., the “Bicycle” in Fig. 1). Moreover, we design a Contrastive Enhancement (CE) module, which can be used as an auxiliary module that leverages latent prototypes to segment the corresponding regions. Our module needs no extra parameters and is transferable, since it shares parameters with the decoder and can be easily applied to other methods to further improve segmentation performance. Extensive experiments show that our method significantly improves the performance of existing baseline methods.

In summary, the contributions of our work are as follows:

  • We propose a latent prototype sampling module to generate prototypes from background of query images. The module can mine the latent objects in the training images and produce novel prototypes from query features, which is directly utilized in contrastive enhancement of conditional segmentation.

  • We propose a contrastive enhancement module to reduce the over-concentration on query features and the neglect of feature similarity in dense comparison. The module is an auxiliary path of the existing decoder without additional parameters.

  • Our method is transferable and can be conveniently applied to existing methods. Experiments show that our approach achieves state-of-the-art performance on few-shot segmentation datasets. The proposed method obtains mIoUs of 65.8%/40.2% for 1-shot and 67.3%/45.3% for 5-shot on Pascal-5^i and COCO-20^i, respectively.

2 Related Works

2.1 Fully-Supervised Semantic Segmentation

Semantic segmentation is a challenging computer vision task that provides pixel-level predictions. Fully Convolutional Networks (FCNs) [21] replaced fully connected layers with convolution layers, a milestone that promoted end-to-end semantic segmentation with deep learning. Based on the FCN architecture, semantic segmentation methods have focused on multi-scale features [3, 26, 13, 44], large receptive fields [2, 4], and high-resolution predictions [1, 9]. Typically, the DeepLab methods [2, 3, 4, 5] proposed dilated convolutions to enlarge the receptive field while preserving feature resolution during downsampling; in addition, the Atrous Spatial Pyramid Pooling (ASPP) module was designed to aggregate multi-scale features. Different from FCNs, recent research shows that transformer architectures [20] can also achieve state-of-the-art performance in semantic segmentation. Although these methods significantly improve the prediction accuracy of semantic segmentation, they need expensive and time-consuming pixel-level annotations for large amounts of images, and they struggle when few labeled images are available.

2.2 Few-Shot Learning

Few-shot learning aims to improve the generalization of models trained with few labeled samples. The main research focuses on the data, model, and algorithm perspectives [32]. From the data perspective, data augmentation [42, 38] generates diverse training samples to promote training, which is also a common technique in fully-supervised learning. Meta-learning [8, 15] is the representative algorithm-based approach, targeting learning from various tasks for better generalization to new tasks. Metric learning methods are an important research line from the model perspective. For few-shot classification, Prototypical Networks [27] compute the similarity to a prototype representation of each class, and the highest similarity indicates the matching class; this kind of method focuses on better prototype generation and similarity measurement. Matching Networks [29] utilize external memory to store task-related knowledge and adopt an attention mechanism to read and update the memory modules. Using cosine similarity as the distance criterion, BD-CSPN [18] proposed label propagation and feature shifting to diminish intra-class and inter-class bias. Our work follows the metric learning line, specifically the prototype learning framework for the few-shot segmentation task. Few-shot segmentation is a pixel-level classification task with very few examples, which has gained extensive attention recently.

2.3 Few-Shot Segmentation

Few-shot segmentation methods are mainly based on metric learning and extend few-shot image classification methods. The work in [6] first formulated n-way k-shot few-shot semantic segmentation and introduced the prototype learning framework. Most methods follow the idea that segmentation is determined by the similarity between prototypes from support images and pixels in query images. SG-One [43] proposed masked GAP to generate prototypes from support images and adopted cosine similarity to make predictions. CANet [40] replaced cosine similarity with dense comparison, which upsamples the prototype vectors and concatenates them with query features to perform conditional segmentation. Some works pointed out that multiple prototypes can represent more information than a single prototype and proposed multi-prototype extraction methods, including the expectation-maximization algorithm [36], superpixel-guided clustering [13], and self-guided learning [39]. Leveraging high-level semantic information and background regions is an important research trend. PFENet [28] generated prior masks from high-level features to guide predictions. To utilize the background regions of training images, [37] proposed a latent class mining strategy and a rectification method for support prototypes. CyCTR [41] proposed a cycle-consistent mechanism integrated into a Transformer architecture, aiming to use the information of the whole support features. Some works [30, 25] directly adopted pixel-to-pixel feature correlation between support and query images to make predictions. This paper proposes a contrastive enhancement method that leverages high-level feature correlation and background query pixel features. The method enables the discovery of latent prototypes and regularizes conditional segmentation. Besides, it is a flexible module that can be integrated into existing prototype-based methods, such as PFENet and CyCTR.

3 Problem Setting

Few-shot segmentation aims to train a model to segment unseen objects with few labeled images of the target category. Annotated images for these categories do not appear in the training dataset: the class set of the test dataset does not overlap with the class set of the training dataset. Consistent with previous works [28, 40, 36], we perform episodic training for few-shot segmentation. For K-shot segmentation, each episode is composed of a support set of K image-mask pairs and a query set, where each sample consists of an RGB image and its segmentation mask. Images in the support and query sets contain objects of the same category, annotated in the segmentation masks. In each episode, the model learns to segment the target class in query images using the support images and support masks. The support-query paradigm is also used during testing, but only the support masks are available as model input.
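As a rough illustration of this episodic protocol, the following sketch shows how a K-shot episode could be assembled (the data structures and function name are ours for illustration, not from the released code):

```python
import random

def sample_episode(images_by_class, mask_of, classes, k=1):
    """Pick a target class, then draw K support image-mask pairs and one
    query pair of that class to form a single training/testing episode."""
    cls = random.choice(classes)
    picks = random.sample(images_by_class[cls], k + 1)
    support = [(img, mask_of[img]) for img in picks[:k]]     # K annotated support pairs
    query_img = picks[k]
    return support, (query_img, mask_of[query_img]), cls     # query mask only used for the loss
```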

4 Methodology

4.1 Overview

Fig. 2 shows the overview of our proposed contrastive enhancement method using latent prototype. We adopt the episodic training framework on support-query pairs.

Initially, the framework employs a pre-trained backbone to extract the middle-level and high-level features of support and query images, which are generally the outputs of the middle and last blocks of VGG and ResNet. Following [28], the similarity between the high-level support and query feature maps is computed as a prior mask to assist prediction. The middle-level support feature maps generate prototype vectors, which are expanded and concatenated with the query feature maps and prior masks. The decoder, for instance the FEM module [28] or the cycle-consistent transformer [41], then makes dense predictions.

Furthermore, parallel to the above process, our framework contains two modules: the Latent Prototype Sampling (LPS) and Contrastive Enhancement (CE) modules. The LPS module is designed to obtain regions of the query image that belong to the same category. The sampling strategy builds on the assumption that regions of the same category have high similarity in high-level feature maps. Masked GAP with the generated pseudo-mask is then employed to produce the latent prototype.

Finally, the CE module computes the similarity between the latent prototype and the middle-level query feature as the prior mask of the auxiliary path. The prior mask, latent prototype, and query feature are given to the decoder to predict the sampled regions, where the known objects are treated as background and the sampled regions as foreground. This module drives the decoder to segment different objects when the same query feature is combined with different prototype vectors.

Figure 2: The pipeline of our method for 1-shot segmentation. The M and H features denote the middle and last block outputs of the backbone. The top part is the main path, which can be a prototype-based baseline method. In the auxiliary path, query image regions with high similarity in high-level features are sampled to generate latent prototypes. The latent and support prototypes are respectively concatenated with the same query feature to make different predictions. The additional path enables mining novel classes and constrains the model training.

4.2 Latent Prototype Sampling

The Latent Prototype Sampling (LPS) module is proposed to mine latent stuff, part of which corresponds to unseen foreground objects in the labeled data. Given a query image and its query mask, middle-level and high-level feature maps are extracted from the backbone. The cosine similarity between the feature vectors at each pair of positions in the high-level feature map is calculated by

(1)

where the positions range over the height and width of the high-level feature map. For each position, the number of feature vectors whose similarity with it exceeds a threshold is recorded. Notably, features belonging to the foreground annotated in the query mask are ignored, because the module focuses on mining unknown objects. The criterion of higher similarity is controlled through a hyper-parameter (the similarity threshold) by

(2)

We expect to sample regions around a central point that is distinctly similar to its surrounding points. The index set of candidate central feature vectors is then constructed as follows:

(3)

where a count threshold filters out isolated points with very few similar features. A central index is randomly selected from this set, and the regions containing the features similar to the chosen center are assumed to belong to the same class. The known foreground annotated in the query mask belongs to a different category than the sampled regions and is treated as background. The remaining regions, identified as neither background nor foreground, are ignored in the loss computation to avoid introducing mistakes. The mask generation is described as follows:

(4)
(5)
(6)

where the three equations respectively define the foreground regions, the ignored regions, and the generated pseudo-mask. The sampling procedure can be conveniently conducted with matrix operations, so the increase in computing time is negligible. We design this simple sampling method rather than adopting existing clustering methods because the number of clustering centers is hard to determine, especially per query image; unreliable clustering would introduce label noise and damage the original training branch. Our approach is simple to implement and efficient for end-to-end training. Masked GAP takes the middle-level features and the generated mask to produce the latent prototype, i.e.

(7)

The latent prototype and generated pseudo-mask are prepared for the contrastive enhancement module.
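A minimal PyTorch sketch of one possible reading of this sampling procedure for a single query image is given below; the count threshold, the ignore label (255), and the single-image interface are our assumptions rather than the released implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def latent_prototype_sampling(feat_mid, feat_high, query_fg,
                              sim_thresh=0.65, count_thresh=10):
    """feat_mid:  [C_m, h, w] middle-level features (used for the prototype)
    feat_high: [C_h, h, w] high-level features (used for similarity sampling)
    query_fg:  [h, w] annotated query foreground, already at feature resolution."""
    c, h, w = feat_high.shape
    flat = F.normalize(feat_high.reshape(c, -1), dim=0)        # one unit vector per position
    sim = flat.t() @ flat                                       # [h*w, h*w] cosine similarities

    fg = query_fg.reshape(-1).bool()
    sim[:, fg] = -1.0                                           # annotated foreground is excluded
    sim[fg, :] = -1.0

    similar = sim > sim_thresh                                  # Eq. (2): "higher similarity" criterion
    counts = similar.sum(dim=1)                                 # number of similar neighbours per point
    candidates = torch.nonzero(counts > count_thresh).flatten() # Eq. (3): drop isolated points
    if candidates.numel() == 0:
        return None, None                                       # no reliable latent region found

    center = candidates[torch.randint(len(candidates), (1,))]   # randomly chosen central point
    pseudo_fg = similar[center].reshape(h, w).float()           # sampled region -> latent foreground
    pseudo_mask = torch.full((h, w), 255.0)                     # 255 = ignored in the loss
    pseudo_mask[query_fg.bool()] = 0.0                          # known foreground -> background
    pseudo_mask[pseudo_fg.bool()] = 1.0                         # sampled region -> foreground

    # Eq. (7): masked GAP over the middle-level features gives the latent prototype.
    proto = (feat_mid * pseudo_fg).sum(dim=(1, 2)) / (pseudo_fg.sum() + 1e-5)
    return pseudo_mask, proto                                    # [h, w], [C_m]
```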

4.3 Contrastive Enhancement

Contrastive Enhancement (CE) is a parallel path that enhances activation on unknown classes and increases attention to high-similarity regions. The similarities between the latent prototype and the high-level features are first computed to generate a prior mask, which indicates the probability of belonging to the same category:

(8)

where the Hadamard product with the mask sets the background positions of the feature map to zero. The values of the prior mask are then normalized to the range [0, 1] as

(9)

where a small constant ensures numerical stability. Using dense comparison, the latent prototype is spatially expanded and concatenated with the query feature and the prior mask to form the input feature maps of the decoder:

(10)
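The following PyTorch sketch illustrates one way this auxiliary-path input could be assembled, under our reading of Eqs. (8)-(10); the feature level used for the prior and the masking convention are assumptions:

```python
import torch
import torch.nn.functional as F

def auxiliary_decoder_input(query_feat, latent_proto, known_fg_mask, eps=1e-7):
    """query_feat: [B, C, h, w] query features fed to the decoder,
    latent_proto: [B, C] latent prototype, known_fg_mask: [B, 1, h, w]
    with 1 at positions of the annotated (known) query foreground."""
    proto_map = latent_proto[:, :, None, None].expand_as(query_feat)
    sim = F.cosine_similarity(query_feat, proto_map, dim=1, eps=eps)   # [B, h, w]
    sim = sim.unsqueeze(1) * (1.0 - known_fg_mask)                     # Hadamard masking of known objects

    # Min-max normalization of the prior mask to [0, 1] (Eq. 9).
    flat = sim.flatten(1)
    mn = flat.min(dim=1, keepdim=True)[0].view(-1, 1, 1, 1)
    mx = flat.max(dim=1, keepdim=True)[0].view(-1, 1, 1, 1)
    prior = (sim - mn) / (mx - mn + eps)

    # Dense comparison: expand the prototype to the spatial size and concatenate (Eq. 10).
    expanded = latent_proto[:, :, None, None].expand(-1, -1, *query_feat.shape[-2:])
    return torch.cat([query_feat, expanded, prior], dim=1)             # auxiliary decoder input
```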

In the main path, the support prototype is generated using the support mask and support features:

(11)

We generate the prior mask between support features and query features following [28]: for each query position, the prior value is the maximum similarity between that query feature vector and all support feature vectors, followed by min-max normalization. With the support prototype expanded in the same way, the input of the decoder in the main path is described as follows:

(12)
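A sketch of this main-path prior and input assembly, following the description of [28] (normalization and masking details reflect our reading, and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def support_query_prior(query_high, support_high, support_mask, eps=1e-7):
    """Training-free prior: for each query position, the maximum cosine similarity
    over all masked support positions, min-max normalized per image.
    query_high / support_high: [B, C, h, w], support_mask: [B, 1, h, w]."""
    b, c, h, w = query_high.shape
    q = F.normalize(query_high.flatten(2), dim=1)                      # [B, C, h*w]
    s = F.normalize((support_high * support_mask).flatten(2), dim=1)   # [B, C, h*w]
    sim = torch.bmm(q.transpose(1, 2), s)                              # [B, hw_q, hw_s]
    prior = sim.max(dim=2)[0]                                          # best-matching support pixel
    mn = prior.min(dim=1, keepdim=True)[0]
    mx = prior.max(dim=1, keepdim=True)[0]
    prior = (prior - mn) / (mx - mn + eps)
    return prior.view(b, 1, h, w)

def main_decoder_input(query_feat, support_proto, prior):
    """Eq. (12): expand the support prototype and concatenate it with the query
    feature and the prior mask, mirroring the auxiliary path."""
    expanded = support_proto[:, :, None, None].expand(-1, -1, *query_feat.shape[-2:])
    return torch.cat([query_feat, expanded, prior], dim=1)
```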
Figure 3: The architecture of FEM [28] and CCT [41]. In our CE module, F1 is the query features. In the main path, M2, F2 are respectively query mask and expanded support prototype. M1 is the prior mask between support features and query features. In the auxiliary path, M2 and F2 are pseudo-mask and expanded latent prototype. M1 is the prior mask between latent prototype and query features.

For the predicted probability maps of the two paths, the segmentation of query images is supervised by the cross-entropy loss. The overall training loss consists of three parts:

(13)

where the first term is the cross-entropy loss on the predicted query mask, the second term is the proposed contrastive enhancement loss on the predicted pseudo-mask, and the third term is the loss specific to the adopted decoder; two weights balance the effect of the multiple losses.
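A hedged sketch of this three-part objective in PyTorch; which of the two weights attaches to which term is our assumption (following the ablation in Section 5.4, the contrastive enhancement term is weighted by 0.1), and 255 is used here as the ignore label:

```python
import torch.nn.functional as F

def total_loss(pred_query, query_mask, pred_pseudo, pseudo_mask, decoder_loss,
               w_con=0.1, w_dec=1.0):
    """pred_query / pred_pseudo: [B, 2, H, W] logits of the main and auxiliary paths;
    query_mask / pseudo_mask: [B, H, W] long targets with 255 marking ignored pixels;
    decoder_loss: the decoder-specific term (e.g. FEM's multi-scale loss)."""
    loss_main = F.cross_entropy(pred_query, query_mask, ignore_index=255)
    loss_con = F.cross_entropy(pred_pseudo, pseudo_mask, ignore_index=255)   # contrastive enhancement
    return loss_main + w_con * loss_con + w_dec * decoder_loss
```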

The CE module can be adapted to various decoder architectures. In our work, the Feature Enrichment Module (FEM) [28] and the Cycle-Consistent Transformer (CCT) [41] are respectively used to obtain enriched query features. As shown in Fig. 3, FEM is a multi-scale structure that leverages multi-level spatial information, and the auxiliary loss is defined over its multi-scale predictions as

(14)

CCT consists of two transformer blocks, where the self-alignment block aggregates global context within the query features and the cross-alignment block leverages information in the support features. In this architecture, the predicted query mask is also used to segment the support images to constrain the model training. With the predicted support mask denoted accordingly, the loss can be described as

(15)

4.4 K-Shot Segmentation

For the K-shot setting, the model is trained on the 1-shot task and directly used for 5-shot evaluation. The support prototype vector and the prior mask are obtained by averaging the corresponding quantities computed from the K support images.
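This fusion is a simple average, which could be sketched as follows (interfaces are illustrative):

```python
import torch

def kshot_fusion(prototypes, priors):
    """Average the support prototypes ([B, C] each) and prior masks
    ([B, 1, h, w] each) computed from the K support images."""
    proto = torch.stack(prototypes, dim=0).mean(dim=0)
    prior = torch.stack(priors, dim=0).mean(dim=0)
    return proto, prior
```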

5 Experiments

5.1 Datasets and Evaluation Metric

Following previous studies [28, 39, 14], we evaluate our methods on two widely used datasets, Pascal-5^i and COCO-20^i. Pascal-5^i combines the PASCAL VOC 2012 [7] and SBD [12] datasets, in which 20 classes are divided into 4 splits. In each experiment, 3 splits are chosen for training and the rest for testing. During evaluation, 1000 support-query pairs from each class are sampled. COCO-20^i is larger and more complex than Pascal-5^i. It is built on the MS-COCO [16] dataset, which contains 80 classes and 82081 images in the training set. Similar to Pascal-5^i, the 80 classes are split into 4 folds for cross-validation, but a total of 20000 support-query pairs are randomly sampled for reliable evaluation [28]. The mean intersection over union (mIoU) and foreground-background intersection over union (FB-IoU), two common evaluation metrics in segmentation, are adopted in the comparison experiments and ablation studies.
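For reference, the two metrics can be computed roughly as follows (per-episode accumulation details vary across papers; this is only a sketch):

```python
import numpy as np

def mean_iou(intersections, unions):
    """mIoU over the novel classes of one fold: per-class intersections and unions
    are accumulated over all test episodes, then per-class IoUs are averaged."""
    ious = intersections / np.maximum(unions, 1)
    return float(ious.mean())

def fb_iou(pred, gt):
    """FB-IoU for one binary prediction: average the IoUs of foreground (1) and
    background (0)."""
    ious = []
    for cls in (0, 1):
        inter = np.logical_and(pred == cls, gt == cls).sum()
        union = np.logical_or(pred == cls, gt == cls).sum()
        ious.append(inter / max(union, 1))
    return sum(ious) / 2
```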

5.2 Implementation Details

Since our approach is flexible to integrate into other few-shot segmentation approaches, we implement different versions based on two baselines: PFENet [28] and CyCTR [41]. VGG16, ResNet50, and ResNet101 pre-trained on ImageNet are used as the backbones to extract features. The feature map of block4 is used to generate the pseudo-mask for contrastive enhancement. Regions with high similarity in query features are randomly selected as foreground masks, and masked GAP is adopted to generate latent prototype vectors. The similarity threshold is set to 0.65. All backbone parameters are frozen, including batch normalization.

For fair comparison, images are resized and cropped to 473x473 (Pascal) or 641x641 (COCO) for the PFENet implementation, while both sizes are 473x473 for the CyCTR implementation. The data augmentation is the same as in previous methods, including random rotation, Gaussian blur, horizontal flip, and crop. Dice loss [24] is used as the loss function for CyCTR and cross-entropy for PFENet. The total number of epochs is set to 50 for COCO-20^i and 200 for Pascal-5^i. For the transformer blocks in CyCTR, the AdamW [22] optimizer is used; the other blocks are optimized by SGD, with different initial learning rates for Pascal-5^i and COCO-20^i. The two loss weights are set to 1.0 and 0.1. Poly learning rate decay with power 0.9 is used. The mini-batch size is set to 4 for Pascal-5^i and 8 for COCO-20^i. We implement our methods in PyTorch, and all experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU.
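A sketch of this two-optimizer setup with poly decay is given below; the concrete learning rates, weight decay, and momentum are placeholders (the original values are not recoverable from this text), and only the optimizer split and the 0.9 decay power follow the description above:

```python
import torch

def build_optimizers(transformer_params, other_params, sgd_lr=2.5e-3, adamw_lr=1e-4):
    """AdamW for the CyCTR transformer blocks, SGD for everything else.
    All hyper-parameter values here are illustrative placeholders."""
    opt_sgd = torch.optim.SGD(other_params, lr=sgd_lr, momentum=0.9, weight_decay=1e-4)
    opt_adamw = torch.optim.AdamW(transformer_params, lr=adamw_lr, weight_decay=1e-2)
    return opt_sgd, opt_adamw

def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Poly learning-rate decay with power 0.9, as stated in the paper."""
    return base_lr * (1 - cur_iter / max_iter) ** power
```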

5.3 Comparisons with State-of-the-Art

Methods Backbone 1-shot 5-shot
f-0 f-1 f-2 f-3 mean f-0 f-1 f-2 f-3 mean
ASR [17] VGG-16 50.2 66.4 54.3 51.8 55.7 53.7 68.5 55.0 54.8 58.0
PFENet [28] 56.9 68.2 54.4 52.4 58.0 59.0 69.1 54.8 52.9 59.0
Ours+PFENet 60.2 69.5 58.7 54.8 60.8 64.2 70.5 59.6 56.8 62.8
SAGNN [34] ResNet-50 64.7 69.6 57.0 57.2 62.1 64.9 70.0 57.0 59.3 62.8
ASR [17] 55.2 70.4 53.4 53.7 58.2 59.4 71.9 56.9 55.7 61.0
PPNet [19] 47.8 58.8 53.8 45.6 51.5 58.4 67.8 64.9 56.7 62.0
CMN [35] 64.3 70.0 57.4 59.4 62.8 65.8 70.4 57.6 60.8 63.7
CWT [23] 56.3 62.0 59.9 47.2 56.4 61.3 68.5 68.5 56.6 63.7
MM-Net [33] 62.7 70.2 57.3 57.0 61.8 62.2 71.5 57.5 62.4 63.4
ASGNet [14] 58.8 67.9 56.8 53.7 59.3 63.7 70.6 64.2 57.4 64.0
SCL [39] 63.0 70.0 56.5 57.7 61.8 64.5 70.9 57.3 58.7 62.9
CyCTR [41] 67.8 72.8 58.0 58.0 64.2 71.1 73.2 60.5 57.5 65.6
PFENet [28] 61.7 69.5 55.4 56.3 60.8 63.1 70.7 55.8 57.9 61.9
Ours+PFENet 63.3 70.6 64.6 57.7 64.1 65.5 71.8 68.3 62.6 67.1
Ours+CyCTR 67.9 73.0 61.8 58.4 65.3 69.4 73.7 61.6 60.8 66.4
DAN [31] ResNet-101 54.7 68.6 57.8 51.6 58.2 57.9 69.0 60.1 54.9 60.5
CWT [23] 56.9 65.2 61.2 48.8 58.0 62.6 70.2 68.8 57.2 64.7
ASGNet [14] 59.8 67.4 55.6 54.4 59.3 64.6 71.3 64.2 57.3 64.4
CyCTR [41] 69.3 72.7 56.5 58.6 64.3 73.5 74.0 58.6 60.2 66.6
PFENet [28] 60.5 69.4 54.4 55.9 60.1 62.8 70.4 54.9 57.6 61.4
Ours+PFENet 61.8 69.7 65.2 57.8 63.6 64.5 71.6 68.6 62.1 66.7
Ours+CyCTR 70.4 72.5 62.4 57.9 65.8 72.2 73.2 63.8 60.0 67.3
Table 1: Comparison with other state-of-the-art methods using mIoU (%) for 1-shot and 5-shot settings on Pascal-5^i.
Methods Backbone 1-shot 5-shot
  f-0   f-1   f-2   f-3 mean   f-0   f-1   f-2   f-3 mean
ASR [17] ResNet-50 30.6 36.7 32.7 35.4 33.9 33.1 39.5 34.2 36.2 35.8
PPNet [19] 34.5 25.4 24.3 18.6 25.7 48.3 30.9 35.7 30.2 36.2
CMN [35] 37.9 44.8 38.7 35.6 39.3 42.0 50.5 41.0 38.9 43.1
CWT [23] 32.2 36.0 31.6 31.6 32.9 40.1 43.8 39.0 42.4 41.3
MM-Net [33] 34.9 41.0 37.2 37.0 37.5 37.0 40.3 39.3 36.0 38.2
ASGNet [14] - - - - 34.6 - - - - 42.5
CyCTR* [41] 36.8 40.2 38.1 36.1 37.8 39.6 43.5 40.7 40.6 41.1
Ours+PFENet 37.2 43.6 40.9 39.1 40.2 41.7 50.4 47.1 44.5 45.9
Ours+CyCTR 36.9 40.9 39.2 39.1 39.0 41.4 42.9 43.3 43.5 42.8
SAGNN [34] ResNet-101 36.1 41.0 38.2 33.5 37.2 40.9 48.3 42.6 38.9 42.7
DAN [31] - - - - 24.4 - - - - 29.6
SCL [39] 36.4 38.6 37.5 35.4 37.0 38.9 40.5 41.5 38.7 39.9
PFENet [28] 34.3 33.0 32.3 30.1 32.4 38.5 38.6 38.2 34.3 37.4
Ours+PFENet 36.0 41.7 39.3 37.1 38.5 40.8 47.8 44.5 41.6 43.7
Table 2: Comparison with other state-of-the-art methods using mIoU (%) for 1-shot and 5-shot settings on COCO-20^i. CyCTR* denotes retesting on COCO 2014 using the code released by the authors; the results reported in [41] are on COCO 2017.
Methods Backbone 1-shot 5-shot
ASR [17] ResNet-50 72.9 74.1
CMN [35] 72.3 72.8
ASGNet [14] 69.2 74.2
SCL [39] 71.9 72.8
PFENet [28] 73.3 73.9
Ours+PFENet 74.6 77.0
Ours+CyCTR 73.2 75.3
DAN [31] ResNet-101 71.9 72.3
ASGNet [14] 71.7 75.2
CyCTR [41] 72.9 75.0
PFENet [28] 72.9 73.5
Ours+PFENet 74.0 76.4
Ours+CyCTR 73.5 74.6
Table 3: Comparison with other approaches using FB-IoU (%) on Pascal-5^i.
Methods Backbone 1-shot 5-shot
CMN [35] ResNet-50 61.7 63.3
ASGNet [14] 60.4 67.0
Ours+PFENet 64.1 67.1
Ours+CyCTR 61.4 62.7
SAGNN [34] ResNet-101 60.9 63.4
DAN [31] 62.3 63.9
PFENet [28] 58.6 61.9
Ours+PFENet 61.6 64.7
Table 4: Comparison with other approaches using FB-IoU (%) on COCO-20^i.
Figure 4: Qualitative results of our method on Pascal-5^i. The fifth column shows the generated pseudo-mask. Using the prototype of the sampled regions, our method makes correct segmentations, while PFENet still segments the original foreground object.
  1-shot 5-shot
Threshold  mIoU  FB-IoU  mIoU  FB-IoU
0.40 63.5 72.7 65.4 73.3
0.50 63.3 73.7 66.3 75.8
0.65 64.1 74.6 67.1 77.0
0.80 64.0 74.8 66.4 76.8
Table 5: Ablation studies on the similarity threshold in latent prototype sampling on Pascal-5^i.
  1-shot 5-shot
Weight  mIoU  FB-IoU  mIoU  FB-IoU
0.00 62.0 72.0 63.1 72.7
0.10 64.1 74.6 67.1 77.0
0.25 63.6 74.5 67.0 76.9
1.00 59.0 71.0 62.1 72.9
Table 6: Ablation studies on the weight of the contrastive enhancement loss on Pascal-5^i.
  Pascal-5^i COCO-20^i
   avg  v-1  v-2  v-3  v-4  v-5  avg  v-1  v-2  v-3  v-4  v-5
  mIoU 67.1 61.6 66.2 66.9 65.0 58.6 45.9 37.3 45.2 46.2 42.5 32.1
  FB-IoU 77.0 68.0 73.7 77.2 77.4 74.6 67.1 59.6 66.9 67.7 65.6 60.4
Table 7: Ablation studies of the 5-shot setting on Pascal-5^i and COCO-20^i. avg is the averaging setting presented in Section 4.4; v-k means that voting is performed at each pixel location and a location is identified as foreground if voted for by at least k support images.

As shown in Table 1, our approach achieves new state-of-the-art performance in comparison with other few-shot segmentation approaches on Pascal-5^i. In particular, our approach greatly improves the performance of the two baselines with different backbones. For ResNet-101, mIoU improves by 3.5% and 5.3% on the 1-shot and 5-shot tasks for PFENet, and by 1.5% and 0.7% for CyCTR. It is worth noting that our approach obtains larger improvements on the 5-shot task; for example, our method significantly outperforms SCL [39] in the 5-shot task, while the performance is close in the 1-shot task. Table 2 shows the comparisons with other methods on COCO-20^i. Our approach outperforms other approaches on this complex dataset, with mIoU increases of 7.8% (8.5%) and 1.2% (1.7%) for PFENet and CyCTR respectively on the 1-shot (5-shot) task. We notice that there is some performance degradation with ResNet-101. The results of 1-shot and 5-shot segmentation using the FB-IoU metric are given in Tables 3 and 4, where our approach also achieves state-of-the-art performance.

Some qualitative results on Pascal-5^i are shown in Fig. 4. Contrastive enhancement can substantially reduce the neglect of similarity between prototype and query features. For instance, the baseline mistakenly segments the airplane although the prototype is generated from the background; our method avoids this and segments parts of the similar regions. In the third row, the baseline cannot segment the “person” even when leveraging the support images. There are segmentation mistakes at the object edges, but the enhanced decoder can accurately locate the object. By utilizing the latent objects in the training set, our approach reduces the decoder's over-concentration on query features and drives it to focus more on similarity information.

5.4 Ablation Study

PFENet is chosen as the baseline for all ablation experiments. Averaging the evaluations over the four splits, we report mIoU and FB-IoU on Pascal-5^i. First, we study the influence of the similarity threshold, which controls how similar regions must be to be considered the same category. Table 5 shows a slight performance degradation as the threshold decreases, because a lower threshold introduces noise into the pseudo-mask. Moreover, the results are not sensitive to the exact threshold, which means the proposed method is relatively robust. The ablation studies on the weight of the contrastive enhancement loss are given in Table 6. The batch normalization parameters are frozen in our implementation, and unified downsampling is performed on support and query features; therefore, the performance improves by 1.2% over the baseline implementation. When adopting the contrastive enhancement loss with weight 0.1, our method increases the mIoU score by 1.8% and 3.3% on the 1-shot and 5-shot tasks. However, with a large weight the loss compels the model to rely too heavily on similarity information, harming generalization. This indicates that the existing framework may implicitly exploit query features for high accuracy.

In Table 7, we compare different settings for the 5-shot segmentation task. The first setting averages the prototypes from the K support images for the query. The other performs K forward passes and marks a pixel position as foreground when at least k support images vote for it. Table 7 shows that the performance of voting is sensitive to the value of k; FB-IoU increases when k is set to 3. Averaging remains stable on both datasets and requires less inference time, so it is adopted in our work.
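The voting variant could be sketched as follows (the averaging fusion used in our work is described in Section 4.4):

```python
import torch

def kshot_vote(per_shot_preds, k_votes):
    """Voting fusion compared in Table 7: given K binary predictions (one per
    support image, each [H, W] with values {0, 1}), mark a pixel as foreground
    if at least k_votes of the K predictions agree."""
    votes = torch.stack(per_shot_preds, dim=0).sum(dim=0)   # per-pixel vote count
    return (votes >= k_votes).long()
```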

6 Conclusions

This paper proposes a contrastive enhancement method using latent prototypes for few-shot segmentation. Aiming to mine novel categories and strengthen the prototype learning architecture, we design two modules, Latent Prototype Sampling (LPS) and Contrastive Enhancement (CE). The LPS module leverages feature similarity to sample image regions of the same category and generates the latent prototype. The CE module utilizes the latent prototype to make the model focus more on the similarity information between prototype and query features for prediction, and it is an auxiliary path without additional parameters. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on Pascal-5^i and COCO-20^i. In the future, we will explore applying our method to multiple-prototype learning approaches.

References

  • [1] Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39(12), 2481–2495 (2017)
  • [2] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062 (2014)
  • [3] Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40(4), 834–848 (2017)
  • [4] Chen, L.C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587 (2017)
  • [5] Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European conference on computer vision (ECCV). pp. 801–818 (2018)
  • [6] Dong, N., Xing, E.P.: Few-shot semantic segmentation with prototype learning. In: BMVC. vol. 3 (2018)
  • [7] Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The pascal visual object classes (voc) challenge. International journal of computer vision 88(2), 303–338 (2010)
  • [8] Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: International conference on machine learning. pp. 1126–1135. PMLR (2017)

  • [9] Fourure, D., Emonet, R., Fromont, E., Muselet, D., Tremeau, A., Wolf, C.: Residual conv-deconv grid network for semantic segmentation. arXiv preprint arXiv:1707.07958 (2017)
  • [10] Gong, Z., Zhong, P., Hu, W.: Statistical loss and analysis for deep learning in hyperspectral image classification. IEEE Transactions on Neural Networks and Learning Systems 32(1), 322–333 (2020)
  • [11] Gong, Z., Zhong, P., Yu, Y., Hu, W., Li, S.: A cnn with multiscale convolution and diversified metric for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing 57(6), 3599–3618 (2019)
  • [12] Hariharan, B., Arbeláez, P., Girshick, R., Malik, J.: Simultaneous detection and segmentation. In: European conference on computer vision. pp. 297–312. Springer (2014)
  • [13] He, J., Deng, Z., Zhou, L., Wang, Y., Qiao, Y.: Adaptive pyramid context network for semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7519–7528 (2019)

  • [14] Li, G., Jampani, V., Sevilla-Lara, L., Sun, D., Kim, J., Kim, J.: Adaptive prototype learning and allocation for few-shot segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8334–8343 (2021)
  • [15] Li, Z., Zhou, F., Chen, F., Li, H.: Meta-sgd: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835 (2017)
  • [16] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740–755. Springer (2014)
  • [17] Liu, B., Ding, Y., Jiao, J., Ji, X., Ye, Q.: Anti-aliasing semantic reconstruction for few-shot semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9747–9756 (2021)
  • [18] Liu, J., Song, L., Qin, Y.: Prototype rectification for few-shot learning. In: European Conference on Computer Vision. pp. 741–756. Springer (2020)
  • [19] Liu, Y., Zhang, X., Zhang, S., He, X.: Part-aware prototype network for few-shot semantic segmentation. In: European Conference on Computer Vision. pp. 142–158. Springer (2020)
  • [20] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 10012–10022 (2021)
  • [21] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3431–3440 (2015)
  • [22] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017)
  • [23] Lu, Z., He, S., Zhu, X., Zhang, L., Song, Y.Z., Xiang, T.: Simpler is better: Few-shot semantic segmentation with classifier weight transformer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 8741–8750 (2021)

  • [24] Milletari, F., Navab, N., Ahmadi, S.A.: V-net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV). pp. 565–571. IEEE (2016)

  • [25] Min, J., Kang, D., Cho, M.: Hypercorrelation squeeze for few-shot segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 6941–6952 (2021)
  • [26] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234–241. Springer (2015)
  • [27] Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. Advances in neural information processing systems 30 (2017)
  • [28] Tian, Z., Zhao, H., Shu, M., Yang, Z., Li, R., Jia, J.: Prior guided feature enrichment network for few-shot segmentation. IEEE transactions on pattern analysis and machine intelligence (2020)
  • [29] Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. Advances in neural information processing systems 29 (2016)
  • [30] Wang, H., Zhang, X., Hu, Y., Yang, Y., Cao, X., Zhen, X.: Few-shot semantic segmentation with democratic attention networks. In: European Conference on Computer Vision. pp. 730–746. Springer (2020)
  • [31] Wang, H., Zhang, X., Hu, Y., Yang, Y., Cao, X., Zhen, X.: Few-shot semantic segmentation with democratic attention networks. In: European Conference on Computer Vision. pp. 730–746. Springer (2020)
  • [32] Wang, Y., Yao, Q., Kwok, J.T., Ni, L.M.: Generalizing from a few examples: A survey on few-shot learning. ACM computing surveys (csur) 53(3), 1–34 (2020)
  • [33] Wu, Z., Shi, X., Lin, G., Cai, J.: Learning meta-class memory for few-shot semantic segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 517–526 (2021)
  • [34] Xie, G.S., Liu, J., Xiong, H., Shao, L.: Scale-aware graph neural network for few-shot semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5475–5484 (2021)
  • [35] Xie, G.S., Xiong, H., Liu, J., Yao, Y., Shao, L.: Few-shot semantic segmentation with cyclic memory network. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 7293–7302 (2021)
  • [36] Yang, B., Liu, C., Li, B., Jiao, J., Ye, Q.: Prototype mixture models for few-shot semantic segmentation. In: European Conference on Computer Vision. pp. 763–778. Springer (2020)
  • [37] Yang, L., Zhuo, W., Qi, L., Shi, Y., Gao, Y.: Mining latent classes for few-shot segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 8721–8730 (2021)
  • [38] Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: Cutmix: Regularization strategy to train strong classifiers with localizable features. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 6023–6032 (2019)
  • [39] Zhang, B., Xiao, J., Qin, T.: Self-guided and cross-guided learning for few-shot segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8312–8321 (2021)
  • [40] Zhang, C., Lin, G., Liu, F., Yao, R., Shen, C.: Canet: Class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5217–5226 (2019)
  • [41] Zhang, G., Kang, G., Yang, Y., Wei, Y.: Few-shot segmentation via cycle-consistent transformer. Advances in Neural Information Processing Systems 34 (2021)
  • [42] Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 (2017)
  • [43] Zhang, X., Wei, Y., Yang, Y., Huang, T.S.: Sg-one: Similarity guidance network for one-shot semantic segmentation. IEEE Transactions on Cybernetics 50(9), 3855–3865 (2020)
  • [44] Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2881–2890 (2017)
  • [45] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2921–2929 (2016)