Weakly Supervised Few-shot Object Segmentation using Co-Attention with Visual and Semantic Inputs

01/26/2020 ∙ by Mennatullah Siam, et al.

Significant progress has been made recently in developing few-shot object segmentation methods. Learning has been shown to be successful in several segmentation settings, including pixel-level, scribble, and bounding-box supervision. These methods can be classified as relying on "strongly labelled" support images, because significant manual annotation effort is required to provide the labels. This paper takes another approach, requiring only image-level classification labels for few-shot object segmentation. If successful, this approach can exploit the large amount of image-level labelled data publicly available. The problem is challenging because image-level labels provide no obvious cues for segmentation. We propose a novel multi-modal interaction module for few-shot object segmentation that utilizes a co-attention mechanism over both visual features and word embeddings. Using image-level labels only, our model achieves a 4.8% improvement over the previously proposed image-level few-shot object segmentation method. It also outperforms state-of-the-art methods that use weak bounding-box supervision on PASCAL-5i. Our results show that few-shot segmentation benefits from utilizing word embeddings, and that we are able to perform few-shot segmentation using stacked joint visual-semantic processing with weak image-level labels. We further propose a novel setup, Temporal Object Segmentation for Few-shot Learning (TOSFL), for videos. TOSFL requires only the image-level label of the first frame in order to segment objects in the following frames. TOSFL provides a novel benchmark for video segmentation that can be used on a variety of public video datasets, such as Youtube-VOS, as demonstrated in our experiments.

1 Introduction

Figure 1: Overview of stacked co-attention to relate the support set and query image using image-level labels. Nx: Co-attention stacked N times. “K-shot” refers to using K support images.

Existing literature in few-shot object segmentation has mainly relied on manually labelled segmentation masks. A few recent works (Rakelly et al., 2018; Zhang et al., 2019b; Wang et al., 2019) started to conduct experiments using weak annotations such as scribbles or bounding boxes. However, these weak forms of supervision still involve more manual work than image-level labels, which can be collected from text and images publicly available on the web. Limited research has been conducted on using image-level supervision for few-shot segmentation (Raza et al., 2019), and most current weakly supervised few-shot segmentation methods lag significantly behind their strongly supervised counterparts.

On the other hand, deep semantic segmentation networks are very successful when trained and tested on relatively large-scale manually labelled datasets such as PASCAL-VOC (Everingham et al., 2015) and MS-COCO (Lin et al., 2014). However, the number of object categories they cover is still limited despite the significant size of the data used. The limited number of objects annotated with pixel-wise labels in existing datasets restricts the applicability of deep learning in inherently open-set domains such as robotics (Dehghan et al., 2019; Pirk et al., 2019). The human visual system, in contrast, has the ability to generalize to new categories from a few labelled samples. It has been shown that adults and even children demonstrate a phenomenon known as "stimulus equivalence" when novel concepts are taught through a combination of visual, textual and verbal stimuli (Sidman, 2009). The relations learned from one modality to another (e.g. written words/pictures related to spoken words) can be transferred to accelerate the learning of new concepts and new relations (e.g. pictures of objects in relation to written words) via the stimulus equivalence principle. Inspired by this, we propose a multi-modal interaction module to boost the efficiency of weakly supervised few-shot object segmentation by combining the visual input with neural word embeddings. Our method iteratively guides a bi-directional co-attention between the support and the query sets using both visual and neural word embedding inputs, with only image-level supervision, as shown in Fig. 1. It outperforms (Raza et al., 2019) by 4.8% and improves over methods that use bounding box supervision (Zhang et al., 2019b; Wang et al., 2019).

Most work in few-shot segmentation considers the static setting, where query and support images have no temporal relation. However, in real-world applications such as robotics, segmentation methods can benefit from temporal continuity and multiple viewpoints. For real-time segmentation it can be of tremendous benefit to utilize the temporal information available in video sequences. The observation that pixels moving together mostly belong to the same object holds very commonly in videos, and it can be exploited to improve segmentation accuracy. We propose a novel setup, temporal object segmentation with few-shot learning (TOSFL), in which support and query images are temporally related. The TOSFL setup for video object segmentation generalizes easily to novel object classes, as can be seen in our experiments on the Youtube-VOS dataset (Xu et al., 2018). TOSFL only requires image-level labels for the first frames (support images) to segment the objects that appear in the frames that follow. The TOSFL setup is interesting because it is closer to the way humans learn about objects than the strongly supervised static segmentation setup.

Youtube-VOS (Xu et al., 2018) provides a way to evaluate on unseen categories, but it does not utilize the category labels in the segmentation model. Our setup relies on the image-level label of the support image: the query image is segmented conditioned on the word embedding of this label. In order to ensure that the evaluation of the few-shot method is not biased towards certain categories, we split the classes into multiple folds and evaluate on each of them, similar to (Shaban et al., 2017).

1.1 Contributions

  • We propose a novel few-shot object segmentation algorithm based on a multi-modal interaction module trained using image-level supervision. It relies on a multi-stage attention mechanism and uses both visual and semantic representations to relate relevant spatial locations in the support and query images.

  • We propose a novel weakly supervised few-shot video object segmentation setup. It complements the existing few-shot object segmentation benchmarks by considering a practically important use case not covered by previous datasets. Video sequences are provided instead of static images, which can simplify the few-shot learning problem.

  • We conduct a comparative study of different architectures proposed in this paper to solve few-shot object segmentation with image-level supervision. Our method compares favourably against the state-of-the-art methods relying on pixel-level supervision and outperforms the most recent methods using weak annotations (Raza et al., 2019; Wang et al., 2019; Zhang et al., 2019b).

2 Related Work

Figure 2: Architecture of the few-shot object segmentation model with co-attention. The ⊕ operator denotes concatenation and ⊙ denotes element-wise multiplication. Only the decoder and multi-modal interaction module parameters are learned, while the encoder is pretrained on ImageNet.

2.1 Few-shot Object Segmentation

Shaban et al. (2017) proposed the first few-shot segmentation method, using a second branch to predict the parameters of the final segmentation layer. Rakelly et al. (2018) proposed a guidance network for few-shot segmentation where the guidance branch receives the support set image-label pairs. Dong and Xing (2018) utilized the second branch to learn prototypes. Zhang et al. (2019b) proposed a few-shot segmentation method based on a dense comparison module with a siamese-like architecture that uses masked average pooling to extract features from the support set, and an iterative optimization module to refine the predictions. Siam et al. (2019) proposed a method to perform few-shot segmentation using adaptive masked proxies to directly predict the parameters for the novel classes. In more recent work, Zhang et al. (2019a) proposed a pyramid graph network which learns attention weights between the support and query sets for further label propagation. Wang et al. (2019) proposed prototype alignment by performing both support-to-query and query-to-support few-shot segmentation using prototypes.

The previous literature focused mainly on using strongly labelled pixel-level segmentation masks for the few examples in the support set. It is labour intensive and impractical to provide such annotations for every novel class, especially in robotics applications that require online learning. A few recent works experimented with weaker annotations based on scribbles and/or bounding boxes (Rakelly et al., 2018; Zhang et al., 2019b; Wang et al., 2019). In our opinion, the most promising approach to relaxing the intense supervision requirements of the few-shot segmentation task is to use publicly available web data with image-level labels. Raza et al. (2019) made a first step in this direction by proposing a weakly supervised method that uses image-level labels. However, the method lags significantly behind approaches that use strongly labelled data.

2.2 Attention Mechanisms

Attention was initially proposed for neural machine translation models (Bahdanau et al., 2014), and several approaches have since been proposed for utilizing it. Yang et al. (2016) proposed a stacked attention network which learns attention maps sequentially on different levels. Lu et al. (2016) proposed co-attention to solve a visual question answering task by alternately shifting attention between visual and question representations. Lu et al. (2019) used co-attention in video object segmentation between frames sampled from a video sequence. Hsieh et al. (2019) rely on an attention mechanism to perform one-shot object detection; however, they mainly use it to attend to the query image, since the given bounding box already provides the region of interest in the support set image. To the best of our knowledge, this work is the first to explore bidirectional attention between support and query sets as a mechanism for solving the few-shot image segmentation task with image-level supervision.

3 Proposed Method

The human perception system is inherently multi-modal. Inspired by this, and to aid the learning of new concepts, we propose a multi-modal interaction module that embeds semantic conditioning in the visual processing scheme, as shown in Fig. 2. The overall model consists of: (1) an encoder, (2) a multi-modal interaction module, and (3) a segmentation decoder. The multi-modal interaction module is described in detail in this section, while the encoder and decoder modules are explained in Section 5.1. We follow a 1-way K-shot setting similar to (Shaban et al., 2017).

Figure 3: Different variants for image-level labelled few-shot object segmentation. (a) V+S: stacked co-attention with visual and semantic representations. (b) V: co-attention with visual features only. (c) S: conditioning on the semantic representation from word embeddings only.

3.1 Multi-Modal Interaction Module

One of the main challenges in dealing with image-level annotation in few-shot segmentation is that quite often both support and query images contain several salient objects from different classes. Inferring a good prototype for the object of interest from multi-object support images, without relying on pixel-level cues or even bounding boxes, becomes particularly challenging. Yet it is exactly in this situation that we can expect semantic word embeddings to help disambiguate the object relationships across support and query images. Suppose the visual and the semantic spaces are sufficiently aligned and meaningful relations between words exist in the semantic space (similar concepts are closer together than dissimilar ones). Then we can expect the word embeddings to provide a sufficiently strong signal for unambiguously relating the required object in the query set to the support set. For example, suppose a support image and a query image both contain a dog and a cat, and that the task is to segment the dog against the background. It is not sufficient to relate the support and the query image through an attention mechanism alone: we will likely find that the images are related in the pixels pertaining to the dog as well as in those pertaining to the cat. Suppose now that we also obtain a word embedding (Mikolov et al., 2013) of the class label dog. We hypothesise that it can be used to single out the feature vectors that belong to the class dog and to suppress the others in the joint visual-semantic space.

Below we discuss the technical details behind the implementation of this idea, depicted in Fig. 2. Initially, in a 1-way K-shot setting, a base network is used to extract features from a support set image $I_s$ and from the query image $I_q$, which we denote as $V_s \in \mathbb{R}^{W \times H \times C}$ and $V_q \in \mathbb{R}^{W \times H \times C}$. Here $W$ and $H$ denote the width and height of the feature maps, respectively, while $C$ denotes the number of feature channels. Furthermore, a projection layer is used on the semantic word embedding to construct a semantic representation $z$. It is then spatially tiled and concatenated with the visual features, resulting in flattened matrix representations $P_s \in \mathbb{R}^{C' \times WH}$ and $P_q \in \mathbb{R}^{C' \times WH}$. An affinity matrix $S \in \mathbb{R}^{WH \times WH}$ is computed to capture the similarity between them via a fully connected layer $W_{co} \in \mathbb{R}^{C' \times C'}$ learning the correlation between feature channels:

$$S = P_s^\top W_{co} P_q.$$

The affinity matrix $S$ relates each spatial location in $P_q$ and $P_s$. A softmax operation is performed on $S$ row-wise and column-wise, depending on the desired direction of relation:

$$S^c = \mathrm{softmax}_{\mathrm{col}}(S), \qquad S^r = \mathrm{softmax}_{\mathrm{row}}(S).$$

For example, column $S^c_{*,j}$ contains the relevance of the $j$-th spatial location in $P_q$ with respect to all spatial locations of $P_s$, where $j = 1, \dots, WH$. The normalized affinity matrices are used to compute attention summaries $U_q$ and $U_s$:

$$U_q = P_s S^c, \qquad U_s = P_q (S^r)^\top.$$

The attention summaries are further reshaped such that $U_q, U_s \in \mathbb{R}^{W \times H \times C'}$ and gated using a gating function $f_g$ with learnable weights $W_g$ and bias $b_g$:

$$f_g(U_q) = \sigma(W_g * U_q + b_g), \qquad \tilde{U}_q = f_g(U_q) \odot U_q.$$

Here the $\odot$ operator denotes element-wise multiplication. The gating function restrains the output to the interval $[0, 1]$ using a sigmoid activation function $\sigma$ in order to mask the attention summaries. The gated attention summaries $\tilde{U}_q$ and $\tilde{U}_s$ are concatenated with the original visual features to construct the final output from the attention module to the decoder.

3.2 Stacked Gated Co-Attention

We propose to stack the multi-modal interaction module described in Section 3.1 to learn an improved representation. Stacking allows for multiple interaction iterations between the support and the query images. The co-attention module has two streams that are responsible for processing the query image and the support set images respectively. The inputs to the co-attention module, $V_q^i$ and $V_s^i$, represent the visual features at iteration $i$ for the query image and the support image respectively. In the first iteration, $V_q^0$ and $V_s^0$ are the output visual features from the encoder. Each multi-modal interaction then follows the recursion:

$$V_q^{i+1} = V_q^i + \phi\!\left(V_q^i \oplus \tilde{U}_q^i\right),$$

where $\oplus$ denotes channel-wise concatenation and $\tilde{U}_q^i$ is the gated attention summary from Section 3.1. The nonlinear projection $\phi$ is performed on the output of each iteration and is composed of a 1x1 convolutional layer followed by a ReLU activation function. The residual connection is used to improve the gradient flow and prevent vanishing gradients. The support set features $V_s^{i+1}$ are computed analogously.
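A short sketch of this stacked recursion is given below, reusing the hypothetical MultiModalInteraction block from Section 3.1. The number of iterations and the exact placement of the 1x1 projection relative to the residual connection are assumptions consistent with the description, not the released code.

```python
import torch
import torch.nn as nn

class StackedCoAttention(nn.Module):
    """Sketch of the Sec. 3.2 recursion: N gated co-attention iterations with residual updates."""

    def __init__(self, dim=256, num_iters=2):
        super().__init__()
        self.blocks = nn.ModuleList(MultiModalInteraction(visual_dim=dim, joint_dim=dim)
                                    for _ in range(num_iters))
        # Nonlinear projection phi: 1x1 convolution followed by ReLU, mapping the
        # concatenated (features ⊕ gated summary) back to dim channels.
        self.proj = nn.ModuleList(nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.ReLU(inplace=True))
                                  for _ in range(num_iters))

    def forward(self, v_q, v_s, z):
        for block, proj in zip(self.blocks, self.proj):
            out_q, out_s = block(v_q, v_s, z)   # visual features concatenated with gated summaries
            v_q = v_q + proj(out_q)             # residual update of the query stream
            v_s = v_s + proj(out_s)             # residual update of the support stream
        return v_q, v_s
```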

4 Temporal Object Segmentation with Few-shot Learning Setup

We propose a novel few-shot video object segmentation (VOS) task. In this task, the image-level label of the first frame is provided in order to learn to segment the object in frames sampled from the ensuing sequence. This is a more challenging task than the one relying on pixel-level supervision in semi-supervised VOS. The task is designed as a binary segmentation problem and the categories are split into multiple folds, consistent with the existing few-shot segmentation tasks defined on PASCAL-5i and MS-COCO. This design ensures that the proposed task assesses the ability of few-shot video segmentation algorithms to generalize over unseen classes. We utilize the training data of the Youtube-VOS dataset, which has 65 classes, and split them into 5 folds. Each fold has 13 classes that are used as novel classes, while the rest are used in the meta-training phase. A randomly sampled class and video sequence are used to construct the support set and the query images. For each query image, a binary segmentation mask is constructed by labelling all the instances belonging to the sampled class as foreground. Accordingly, the same image can have multiple binary segmentation masks depending on the sampled class.
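The sketch below illustrates how a TOSFL episode could be assembled from Youtube-VOS-style annotations under this setup. The `sequences` data layout and the field names `instance_classes` and `instance_masks` are hypothetical conveniences, not the actual dataset API.

```python
import random
import numpy as np

def sample_tosfl_episode(sequences, fold_classes, num_queries=1):
    """Sketch of TOSFL episode construction (Sec. 4).

    `sequences` is assumed to map a sequence id to an ordered list of frames, each a dict
    with an "image" array, an "instance_classes" map (instance id -> class name) and
    per-instance binary "instance_masks". The real Youtube-VOS layout differs.
    """
    cls = random.choice(fold_classes)                       # sampled novel class
    # Keep only sequences in which this class appears.
    candidates = [sid for sid, frames in sequences.items()
                  if any(cls in f["instance_classes"].values() for f in frames)]
    frames = sequences[random.choice(candidates)]

    # Support: the first frame with its image-level label only (no mask).
    support = {"image": frames[0]["image"], "label": cls}

    queries = []
    for f in random.sample(frames[1:], min(num_queries, len(frames) - 1)):
        # Binary mask: all instances of the sampled class are foreground, everything else background.
        mask = np.zeros(f["image"].shape[:2], dtype=np.uint8)
        for inst_id, inst_cls in f["instance_classes"].items():
            if inst_cls == cls:
                mask |= f["instance_masks"][inst_id].astype(np.uint8)
        queries.append({"image": f["image"], "mask": mask})
    return support, queries
```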

5 Experiments

In this section we present the results of experiments conducted on the PASCAL-5i dataset (Shaban et al., 2017) and compare against state-of-the-art methods in Section 5.2. Not only do we set strong baselines for image-level labelled few-shot segmentation and outperform previously proposed work (Raza et al., 2019), but we also perform close to state-of-the-art conventional few-shot segmentation methods that use detailed pixel-wise segmentation masks. We then report results for the different variants of our approach depicted in Fig. 3 and experiment with the proposed TOSFL setup in Section 5.3.

Method Type | 1-shot: fold 1, 2, 3, 4, mIoU, bIoU | 5-shot: fold 1, 2, 3, 4, mIoU
FG-BG P - - - - - 55.1 - - - - -
OSLSM (Shaban et al., 2017) P 33.6 55.3 40.9 33.5 40.8 - 35.9 58.1 42.7 39.1 43.9
CoFCN (Rakelly et al., 2018) P 36.7 50.6 44.9 32.4 41.1 60.1 37.5 50.0 44.1 33.9 41.4
PLSeg (Dong and Xing, 2018) P - - - - - 61.2 - - - - -
AMP (Siam et al., 2019) P 41.9 50.2 46.7 34.7 43.4 62.2 41.8 55.5 50.3 39.9 46.9
PANet (Wang et al., 2019) P 42.3 58.0 51.1 41.2 48.1 66.5 51.8 64.6 59.8 46.5 55.7
CANet (Zhang et al., 2019b) P 52.5 65.9 51.3 51.9 55.4 66.2 55.5 67.8 51.9 53.2 57.1
PGNet (Zhang et al., 2019a) P 56.0 66.9 50.6 50.4 56.0 69.9 57.7 68.7 52.9 54.6 58.5
CANet (Zhang et al., 2019b) BB - - - - 52.0 - - - - - -
PANet (Wang et al., 2019) BB - - - - 45.1 - - - - - 52.8
(Raza et al., 2019) IL - - - - - 58.7 - - - - -
Ours(V+S)-1 IL 49.5 65.5 50.0 49.2 53.5 65.6 - - - - -
Ours(V+S)-2 IL 42.5 64.8 48.1 46.5 50.5 64.1 45.9 65.7 48.6 46.6 51.7
Table 1: Quantitative results for 1-way 1-shot and 5-shot segmentation on the PASCAL-5i dataset, showing mean-IoU and binary-IoU. P: pixel-wise segmentation masks used for supervision. IL: weak supervision from image-level labels. BB: bounding boxes used for weak supervision.

5.1 Experimental Setup

Network Details: We utilize a ResNet-50 (He et al., 2016) encoder pre-trained on ImageNet (Deng et al., 2009) to extract visual features. The segmentation decoder consists of an iterative optimization module (IOM) (Zhang et al., 2019b) and an atrous spatial pyramid pooling (ASPP) module (Chen et al., 2017a, b). The IOM takes the output feature maps from the multi-modal interaction module and the previously predicted probability map in a residual form.
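As a rough illustration of this wiring, the sketch below shows a simplified decoder that repeatedly injects the previous probability map in a residual fashion before an ASPP-style head. It omits many details of the actual IOM (Zhang et al., 2019b) and ASPP (Chen et al., 2017a, b) designs; all layer sizes, the number of refinement iterations, and the module name are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleIOMDecoder(nn.Module):
    """Simplified sketch of the Sec. 5.1 decoder: iterative residual refinement + ASPP-style head."""

    def __init__(self, in_dim=256, num_classes=2, aspp_rates=(6, 12, 18)):
        super().__init__()
        self.fuse = nn.Conv2d(in_dim + num_classes, in_dim, 3, padding=1)
        self.residual = nn.Sequential(nn.Conv2d(in_dim, in_dim, 3, padding=1), nn.ReLU(inplace=True),
                                      nn.Conv2d(in_dim, in_dim, 3, padding=1))
        # ASPP-style head: parallel dilated convolutions at several rates.
        self.aspp = nn.ModuleList(nn.Conv2d(in_dim, in_dim, 3, padding=r, dilation=r) for r in aspp_rates)
        self.classify = nn.Conv2d(in_dim * len(aspp_rates), num_classes, 1)

    def forward(self, feats, num_iters=3):
        b, _, h, w = feats.shape
        prob = feats.new_zeros(b, 2, h, w)                   # initial (uninformative) probability map
        for _ in range(num_iters):
            x = self.fuse(torch.cat([feats, prob], dim=1))   # inject the previous prediction
            x = x + self.residual(x)                         # residual refinement (IOM-style)
            x = torch.cat([F.relu(branch(x)) for branch in self.aspp], dim=1)
            logits = self.classify(x)
            prob = torch.softmax(logits, dim=1)              # feed back into the next iteration
        return logits
```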

Meta-Learning Setup: We sample 12,000 tasks during the meta-training stage. In order to evaluate test performance, we average accuracy over 5,000 tasks, with support and query sets sampled from the meta-test dataset belonging to the classes of the test fold. We perform 5 training runs with different random generator seeds and report the average of the 5 runs together with the 95% confidence interval.
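A small sketch of how such a summary could be computed is shown below, assuming a normal-approximation confidence interval (1.96 × standard error of the mean over runs); the exact statistic used by the authors is not stated in the text, and the numbers in the usage comment are invented for illustration.

```python
import numpy as np

def summarize_runs(run_mious):
    """Average the mean-IoU of several training runs and report a 95% confidence interval.

    Uses the normal approximation 1.96 * (sample std / sqrt(n)); this formula is an assumption.
    """
    scores = np.asarray(run_mious, dtype=np.float64)
    mean = scores.mean()
    stderr = scores.std(ddof=1) / np.sqrt(len(scores))
    return mean, 1.96 * stderr

# e.g. summarize_runs([53.1, 53.8, 53.4, 53.6, 53.5]) -> (53.48, ~0.23)
```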

Evaluation Protocol: PASCAL-5i splits the 20 PASCAL-VOC classes into 4 folds, each containing 5 classes. The mean IoU (mIoU) and binary IoU (bIoU) are the two metrics used for evaluation. The mIoU computes the intersection over union for each of the 5 classes within the fold and averages them, neglecting the background, whereas the bIoU metric proposed by Rakelly et al. (2018) computes the mean of foreground and background IoU in a class-agnostic manner. We have noticed some deviation in the validation schemes used in previous works: Zhang et al. (2019b) follow a procedure where validation is performed on the test classes to save the best model, whereas Wang et al. (2019) do not perform validation and instead train for a fixed number of iterations. We adopt the more challenging approach of (Wang et al., 2019).
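The two metrics can be sketched as follows for binary boolean masks. This per-image formulation is a simplification: the standard protocol accumulates intersections and unions over all test episodes of a class before dividing, so treat the helpers below as illustrative only.

```python
import numpy as np

def iou(pred, gt):
    """IoU between two binary boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def mean_iou(per_class_preds, per_class_gts):
    """mIoU over a fold: average the foreground IoU of each test class, ignoring background."""
    return float(np.mean([iou(p, g) for p, g in zip(per_class_preds, per_class_gts)]))

def binary_iou(pred, gt):
    """bIoU (Rakelly et al., 2018): class-agnostic mean of foreground and background IoU."""
    return 0.5 * (iou(pred, gt) + iou(np.logical_not(pred), np.logical_not(gt)))
```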

Training Details: During meta-training, we freeze the ResNet-50 encoder weights and learn only the multi-modal interaction module and the decoder. We train all models using momentum SGD with momentum 0.9 and a learning rate that is reduced by a factor of 0.1 at epochs 35, 40 and 45. L2 regularization is used to avoid over-fitting. A batch size of 4 and an input resolution of 321×321 are used during training, with random horizontal flipping and random centered cropping for the support set. An input resolution of 500×500 is used for the meta-testing phase, similar to (Shaban et al., 2017). In each fold the model is meta-trained for a maximum of 50 epochs on the classes outside the test fold.
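A sketch of this optimization setup in PyTorch is shown below. The base learning rate and the L2 regularization factor are placeholders (their exact values are garbled in the source text), the `model.encoder` attribute is hypothetical, and the centered-crop transform only approximates the described "random centered cropping".

```python
import torch
import torchvision.transforms as T

def build_optimizer(model, base_lr=1e-3, weight_decay=5e-4):
    """Optimization setup sketched from Sec. 5.1; base_lr and weight_decay are placeholders."""
    # Freeze the ImageNet-pretrained ResNet-50 encoder; train only the interaction module and decoder.
    for p in model.encoder.parameters():
        p.requires_grad = False
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=base_lr, momentum=0.9, weight_decay=weight_decay)
    # Reduce the learning rate by a factor of 0.1 at epochs 35, 40 and 45.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[35, 40, 45], gamma=0.1)
    return optimizer, scheduler

# Support-set augmentation during meta-training (approximation of flip + centered crop at 321x321).
support_transform = T.Compose([T.RandomHorizontalFlip(), T.CenterCrop(321), T.ToTensor()])
```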

Figure 4: Qualitative evaluation on PASCAL-5i 1-way 1-shot. The support set and the prediction on the query image are shown in pairs: (a) 'bicycle', (b) 'bottle', (c) 'bird', (d) 'bicycle', (e) 'bird', (f) 'boat'.

5.2 Comparison to the state-of-the-art

We compare the results of our best variant (see Fig. 3), i.e. stacked co-attention (V+S), against other state-of-the-art methods for 1-way 1-shot and 5-shot segmentation on PASCAL-5i in Table 1. We report results for two validation schemes: Ours(V+S)-1 follows (Zhang et al., 2019b) and Ours(V+S)-2 follows (Wang et al., 2019). Without utilizing segmentation masks or even sparse annotations, our method, with the weakest supervision of image-level labels, performs close (53.5%) to the current state-of-the-art strongly supervised methods (56.0%) in the 1-shot case and outperforms the ones that use bounding box annotations. It also improves over the previously proposed image-level supervised method by a significant margin (4.8%). For the K-shot extension of our method, we average the attention summaries over the K support samples during meta-training. Table 2 shows results on MS-COCO (Lin et al., 2014) compared to the state-of-the-art method that uses pixel-wise segmentation masks for the support set.

Method Type 1-shot 5-shot
PANet (Wang et al., 2019) P 20.9 29.7
Ours-(V+S) IL 15.0 15.6
Table 2: Quantitative results for 1-way few-shot segmentation on MS-COCO. P: pixel-wise segmentation masks for supervision; IL: image-level labels.

5.3 Ablation Study

We perform an ablation study to evaluate the different variants of our method depicted in Fig. 3. Table 3 shows the results for the three variants on PASCAL-5i. It clearly shows that using visual features only (the V-method) lags 5% behind utilizing word embeddings in the 1-shot case. This is mainly due to having multiple common objects between the support set and the query image; the semantic representation helps resolve the ambiguity and improves the result significantly, as shown in Figure 5. Going from 1 to 5 shots, the V-method improves, because multiple shots are likely to repeatedly contain the object of interest and the associated ambiguity decreases, but it still lags behind both variants supported by semantic input. Interestingly, our results show that the baseline of conditioning on the semantic representation alone (the S-method) is very competitive: in the 1-shot case it even outperforms the (V+S) variant. However, the bottleneck of the simple scheme for integrating the semantic representation depicted in Fig. 3(c) is that it is not able to benefit from multiple shots in the support set. The (V+S)-method in the 5-shot case improves over the 1-shot case by 1.2% on average over the 5 runs, which confirms its ability to effectively utilize the more abundant visual features in the 5-shot case. One reason could explain the strong performance of the (S) variant: in the single-shot case, the word embedding pretrained on a massive text corpus may provide a more reliable guidance signal than a single support image that contains multiple objects and does not necessarily have visual features close to the object in the query image.

Table 4 shows the results on our proposed novel video segmentation task, comparing variants of the proposed approach. As before, the baseline V-method, based on a co-attention module with no word embeddings similar to (Lu et al., 2019), lags behind both the S- and (V+S)-methods. It is worth noting that, unlike conventional video object segmentation setups, the proposed task poses the problem as binary segmentation conditioned on the image-level label. Both support and query frames can contain multiple salient objects; however, the algorithm has to segment only the one corresponding to the image-level label provided with the support frame. According to our observations, this multi-object situation occurs much more frequently in this task than, e.g., in the case of PASCAL-5i. Additionally, not only the target but also the nuisance objects present in the video sequence relate across frames via different viewpoints and deformations. Table 4 demonstrates that the joint visual and semantic processing of the (V+S)-method provides a clear and significant gain in such a scenario.

Method 1-shot 5-shot
V
S
V+S
Table 3: Ablation study on the 4 folds of PASCAL-5i for few-shot segmentation, showing mean-IoU for the different variants. V: visual only, S: semantic only, V+S: both features.
Method 1 2 3 4 5 Mean-IoU
V 40.8 34.0 44.4 35.0 35.5 37.9
S 42.7 40.8 48.7 38.8 37.6 41.7
V+S 46.1 42.0 50.7 41.2 39.2 43.8
Table 4: Quantitative results for the one-shot weakly supervised setup on Youtube-VOS, showing IoU per fold and the mean-IoU over all folds, following the same fold protocol as PASCAL-5i. V: visual, S: semantic, V+S: both features.
Figure 5: Visual comparison between the predictions from two variants of our method: (a) image-level label 'Bike', (b) prediction of the V variant, (c) prediction of the V+S variant.

6 Conclusion

In this paper we proposed a multi-modal interaction module that relates the support set image and the query image using both visual features and word embeddings. We proposed to meta-learn a stacked co-attention module that guides the segmentation of the query based on the support set features and vice versa. The two main takeaways from the experiments are that (i) few-shot segmentation significantly benefits from utilizing word embeddings and (ii) it is viable to perform high-quality few-shot segmentation using stacked joint visual-semantic processing with weak image-level labels.

References

  • D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: §2.2.
  • L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille (2017a) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40 (4), pp. 834–848. Cited by: §5.1.
  • L. Chen, G. Papandreou, F. Schroff, and H. Adam (2017b) Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587. Cited by: §5.1.
  • M. Dehghan, Z. Zhang, M. Siam, J. Jin, L. Petrich, and M. Jagersand (2019) Online object and task learning via human robot interaction. In 2019 International Conference on Robotics and Automation (ICRA), pp. 2132–2138. Cited by: §1.
  • J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §5.1.
  • N. Dong and E. P. Xing (2018) Few-shot semantic segmentation with prototype learning. In BMVC, Vol. 3, pp. 4. Cited by: §2.1, Table 1.
  • M. Everingham, S.M. A. Eslami, L. Van Gool, C. K.I. Williams, J. Winn, and A. Zisserman (2015) The pascal visual object classes challenge: a retrospective. International journal of computer vision 111 (1), pp. 98–136. Cited by: §1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §5.1.
  • T. Hsieh, Y. Lo, H. Chen, and T. Liu (2019) One-shot object detection with co-attention and co-excitation. In Advances in Neural Information Processing Systems, pp. 2721–2730. Cited by: §2.2.
  • T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §1, §5.2.
  • J. Lu, J. Yang, D. Batra, and D. Parikh (2016) Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pp. 289–297. Cited by: §2.2.
  • X. Lu, W. Wang, C. Ma, J. Shen, L. Shao, and F. Porikli (2019) See more, know more: unsupervised video object segmentation with co-attention siamese networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3623–3632. Cited by: §2.2, §5.3.
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §3.1.
  • S. Pirk, M. Khansari, Y. Bai, C. Lynch, and P. Sermanet (2019) Online object representations with contrastive learning. arXiv preprint arXiv:1906.04312. Cited by: §1.
  • K. Rakelly, E. Shelhamer, T. Darrell, A. Efros, and S. Levine (2018) Conditional networks for few-shot semantic segmentation. Cited by: §1, §2.1, §2.1, §5.1, Table 1.
  • H. Raza, M. Ravanbakhsh, T. Klein, and M. Nabi (2019) Weakly supervised one shot segmentation. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 0–0. Cited by: 3rd item, §1, §1, §2.1, Table 1, §5.
  • A. Shaban, S. Bansal, Z. Liu, I. Essa, and B. Boots (2017) One-shot learning for semantic segmentation. arXiv preprint arXiv:1709.03410. Cited by: §1, §2.1, §3, §5.1, Table 1, §5.
  • M. Siam, B. N. Oreshkin, and M. Jagersand (2019) AMP: adaptive masked proxies for few-shot segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5249–5258. Cited by: §2.1, Table 1.
  • M. Sidman (2009) Equivalence relations and behavior: an introductory tutorial. The Analysis of verbal behavior 25 (1), pp. 5–17. Cited by: §1.
  • K. Wang, J. H. Liew, Y. Zou, D. Zhou, and J. Feng (2019) PANet: few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9197–9206. Cited by: 3rd item, §1, §1, §2.1, §2.1, §5.1, §5.2, Table 1, Table 2.
  • N. Xu, L. Yang, Y. Fan, D. Yue, Y. Liang, J. Yang, and T. Huang (2018) Youtube-vos: a large-scale video object segmentation benchmark. arXiv preprint arXiv:1809.03327. Cited by: §1, §1.
  • Z. Yang, X. He, J. Gao, L. Deng, and A. Smola (2016) Stacked attention networks for image question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 21–29. Cited by: §2.2.
  • C. Zhang, G. Lin, F. Liu, J. Guo, Q. Wu, and R. Yao (2019a) Pyramid graph networks with connection attentions for region-based one-shot semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9587–9595. Cited by: §2.1, Table 1.
  • C. Zhang, G. Lin, F. Liu, R. Yao, and C. Shen (2019b) CANet: class-agnostic segmentation networks with iterative refinement and attentive few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5217–5226. Cited by: 3rd item, §1, §1, §2.1, §2.1, §5.1, §5.1, §5.2, Table 1.