Manifold-driven Attention Maps for Weakly Supervised Segmentation

by Sukesh Adiga V, et al.

Segmentation using deep learning has shown promising directions in medical imaging, as it aids in the analysis and diagnosis of diseases. Nevertheless, a main drawback of deep models is that they require a large amount of pixel-level labels, which are laborious and expensive to obtain. To mitigate this problem, weakly supervised learning has emerged as an efficient alternative, which employs image-level labels, scribbles, points, or bounding boxes as supervision. Among these, image-level labels are the easiest to obtain. However, since this type of annotation only contains object category information, segmentation under this learning paradigm is a challenging problem. To address this issue, visual salient regions derived from trained classification networks are typically used. Despite their success in identifying important regions on classification tasks, these saliency regions focus only on the most discriminant areas of an image, limiting their use in semantic segmentation. In this work, we propose a manifold-driven attention-based network to enhance visual salient regions, thereby improving segmentation accuracy in a weakly supervised setting. Our method generates superior attention maps directly during inference, without the need for extra computation. We evaluate the benefits of our approach on the segmentation task using a public benchmark of skin lesion images. Results demonstrate that our method outperforms the state-of-the-art GradCAM by a margin of 22% in terms of Dice score.



1 Introduction

Semantic segmentation is a mainstay in medical imaging, as it serves for the diagnosis and treatment of many diseases. In recent years, we have witnessed significant advances in segmentation approaches based on deep learning, mainly using Convolutional Neural Networks (CNN). This progress is partly due to the availability of large amounts of labelled training data [4, 9]. Nevertheless, obtaining such large labelled datasets involves pixel-wise annotation of thousands of images, which is a laborious task, prone to subject variability. This is further magnified in medical imaging, since segmentation requires specific expert knowledge.

Recently, weakly supervised segmentation (WSS) has emerged as an alternative to alleviate the need for large pixel-level labelled training datasets. Weak labels can come in the form of image-level labels [18], scribbles [14], points [1], bounding boxes [20] or direct losses [10]. Among these supervisory signals, image-level labels are typically preferred, as they are easier and less expensive to obtain [1]. This form of annotation assumes that, given a global label, the model will be able to find common patterns that are present in positive samples (containing the class) and absent in negative samples.

If learning relies entirely on image-level labels, the only known information is the object category. In this scenario, learning discriminative features that lead to accurate pixel-level segmentation is a challenging problem, since the association between semantic categories (global) and spatial information (local) is not provided. To address this limitation, visual salient regions derived from complementary tasks, such as classification, are typically integrated during training [17, 19]. In particular, class activation maps (CAM) [27] have gained popularity in identifying saliency regions based on image labels. They are obtained by associating the feature maps of the last layers and weighting their activations. In practice, this boils down to replacing the fully connected layers of a classification network with a global average pooling (GAP) layer, which generates the class-specific feature maps, named CAMs. The main drawback of this approach is that the generated saliency maps are typically spread around the target object, focusing only on the most discriminant areas. This limits their usability as pixel-level supervision for semantic segmentation. To enhance the generated saliency regions, alternatives based on back-propagation (GradCAM [23]) or super-pixels (SP-CAM [13]) have been proposed. Nevertheless, these methods demand additional gradient computations [23] or supervision [13].
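For concreteness, the CAM construction can be sketched as follows; this is a minimal NumPy illustration in which the feature maps `fmaps`, the dense-layer weights `w`, the ReLU, and the normalisation to [0, 1] are illustrative conventions rather than the exact recipe of [27]:

```python
import numpy as np

def class_activation_map(fmaps, w, cls):
    """Compute a CAM as the class-weighted sum of the last conv feature maps.

    fmaps: (d, h, w) feature maps from the last convolutional layer
    w:     (num_classes, d) weights of the dense layer that follows GAP
    cls:   index of the target class
    """
    cam = np.tensordot(w[cls], fmaps, axes=1)  # (h, w) weighted sum over channels
    cam = np.maximum(cam, 0.0)                 # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                  # normalise to [0, 1]
    return cam

# Toy example: 4 feature maps of size 8x8, 3 classes
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
w = rng.random((3, 4))
cam = class_activation_map(fmaps, w, cls=1)
```

The resulting map highlights which spatial locations contribute most to the chosen class score, which is exactly why raw CAMs concentrate on the most discriminant areas.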

The literature on weakly supervised segmentation in medical imaging remains scarce, with few alternatives to address this problem. While some methods resort to direct losses, hence requiring additional priors such as the target size [8, 10], other approaches rely on stronger forms of supervision, e.g., bounding boxes [20]. Tackling this problem with image-level labels, typically through visual features, has not been much investigated [5, 6, 16]. For example, Nguyen et al. [16] proposed a CAM-based approach for the segmentation of uveal melanoma. In their method, the CAMs generated by the classification network are further refined by an active shape model and a CRF [12]. The enhanced maps are later employed as segmentation proposals to train a segmentation CNN. More recently, CAMs derived from image-level labels were combined with attention scores to refine lesion segmentation in brain images [26], demonstrating improved performance compared to vanilla CAMs. However, these methods rely on a trained classification network or an auxiliary classification branch to generate the visual saliency regions, and typically integrate complex models to enhance the final segmentation. We argue that, instead, employing visual manifold networks is a new, better-performing approach to discriminate the identified saliency regions. Our motivation is that these networks map input images into a manifold space, where similarities between images are preserved. Enforcing attention to relevant visual regions should thus lead to consistent feature representations for two different images belonging to the same class. This motivates the use of a manifold network that jointly generates robust feature representations for the manifold task and learns consistent visual attention regions for images of the same category. Moreover, GradCAM cannot be applied directly to manifold networks [2], whereas the attention module in our manifold network directly produces attention maps.

Our contribution.

We propose to derive visual attention from a manifold learning network and to leverage the generated visual clues as strong proxies for semantic segmentation. Specifically, we integrate an attention module that (i) obtains visual attention directly, (ii) focuses the attention on the target object for the manifold learning task, and (iii) serves to produce proxy labels for segmentation. As we demonstrate in our experiments, the proposed method provides better attention maps than the state-of-the-art GradCAM applied to classification networks. We evaluate the proposed method with extensive experiments on a public skin lesion benchmark, ISIC [24, 3], in the task of weakly supervised segmentation.

Figure 1: Schematic of the proposed pipeline for weakly supervised segmentation using only image-level labels. In the manifold learning phase, attention maps are produced while learning the manifold space using image-level labels. These attention maps are then used as proxy labels in the segmentation network for pixel-level prediction.

2 Method

The pipeline of our proposed weakly supervised learning is shown in Fig. 1. The main idea is to learn attention maps from a manifold learning network trained on image-level labels, which can then be used as segmentation proposals to train a segmentation CNN, mimicking full supervision. To achieve this, we first introduce an attention module in the manifold learning pipeline, which generates an attention map for each image. The underlying manifold learning pipeline is inspired by the recent divide and conquer metric learning (DCML) method [22], which simplifies the learning task by dividing the original manifold space into several subspaces. The generated attention maps are then used as proxy labels to train a segmentation network. In the following sections, we first describe our proposed attentive manifold learning formulation and then the weakly supervised segmentation setting.

2.1 Attentive Manifold Learning

Let $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$ be the training data, where $x_n \in \mathbb{R}^{W \times H}$ is an image of width $W$ and height $H$, and $y_n \in \{1, \dots, C\}$ its corresponding image-level label, with $C$ as the total number of classes. Our aim is to learn attention maps from the manifold network. To define the attention module, let $x$ be an input image. The feature extractor $f_\theta$ produces a feature map $F \in \mathbb{R}^{w \times h \times d}$, where $w < W$ and $h < H$. If we denote the attention module as $\phi$, the attention map $A \in [0, 1]^{w \times h}$ for a given input image $x$ can be defined as:

$$A = \phi\big(f_\theta(x)\big)$$

The generated attention map $A$ is multiplied with each feature map $F_k$, i.e., $\tilde{F}_k = A \odot F_k$, where $\odot$ is the element-wise product. This helps to focus on the target objects during the manifold learning task and enables the generation of an attention map directly at inference. The attentive feature maps $\tilde{F}$ are then combined into a $d$-dimensional vector using global average pooling (GAP), which acts as a regularizer [15]. The resulting features are mapped into the manifold space using a dense layer, as shown in Fig. 1.
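The attention-weighted pooling described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the {128, 32, 1} convolutional attention stack is replaced by precomputed attention scores, and all shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attentive_gap(fmaps, attn_logits):
    """Weight every feature map by a shared attention map, then pool.

    fmaps:       (d, h, w) feature maps from the extractor
    attn_logits: (h, w) raw attention scores; in the paper they come from
                 a small conv stack, here they are assumed precomputed
    Returns the d-dimensional pooled vector and the attention map A.
    """
    A = sigmoid(attn_logits)          # attention map in [0, 1]
    weighted = fmaps * A[None, :, :]  # element-wise product per channel
    v = weighted.mean(axis=(1, 2))    # global average pooling -> (d,)
    return v, A

# Toy check: unit features and zero logits give A = 0.5 everywhere,
# so each pooled feature equals 0.5.
v, A = attentive_gap(np.ones((4, 2, 2)), np.zeros((2, 2)))
```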

To learn the manifold space, we employ a metric learning approach, i.e., a mapping $g_\psi: \mathcal{I} \rightarrow \mathcal{M} \subseteq \mathbb{R}^{D}$, where $D$ is the dimension of the manifold space. Metric learning maps semantically similar images in the input space $\mathcal{I}$ (i.e., same class) onto metrically close points in the learned manifold $\mathcal{M}$. Similarly, semantically dissimilar images in $\mathcal{I}$ should be mapped metrically far apart in $\mathcal{M}$. The parameters $\psi$ are typically learned using a distance metric. In this work, we use, without loss of generality, a margin loss [25] to learn the parameters $\psi$ (note that any other distance-based loss can be used for this task), defined as:

$$\mathcal{L}_{\text{margin}}(i, j) = \max\big(0,\; \alpha + y_{ij}\,(D_{ij} - \beta)\big)$$

where $D_{ij} = \lVert g_\psi(x_i) - g_\psi(x_j) \rVert_2$ is the Euclidean norm between a pair of images $x_i$ and $x_j$ in the manifold space $\mathcal{M}$. The parameters $\alpha$ and $\beta$ represent the separation margin and the boundary between similar and dissimilar pairs, respectively. The label $y_{ij}$ indicates whether the images in the pair are similar ($y_{ij} = 1$) or different ($y_{ij} = -1$).
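Under this formulation, the loss for a single pair can be computed as below; a pure-Python sketch using the hyper-parameter values α = 0.2 and β = 1.2 later adopted in Sec. 3.2 (the embedding values are toy inputs):

```python
import math

def margin_loss(e_i, e_j, similar, alpha=0.2, beta=1.2):
    """Margin loss of Wu et al. for one pair of embeddings.

    e_i, e_j: embedding vectors (lists of floats)
    similar:  True if both images share the same class label (y_ij = +1)
    """
    # Euclidean distance between the two points in the manifold space
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(e_i, e_j)))
    y = 1.0 if similar else -1.0
    return max(0.0, alpha + y * (d - beta))

# A similar pair closer than beta - alpha incurs (almost) no loss,
# while a dissimilar pair at the same distance is penalised:
loss_sim = margin_loss([0.0, 0.0], [0.6, 0.8], similar=True)    # ~0.0
loss_dis = margin_loss([0.0, 0.0], [0.6, 0.8], similar=False)   # ~0.4
```

The hinge keeps similar pairs within distance β − α and pushes dissimilar pairs beyond β + α, which is what shapes the manifold geometry.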

Several metric learning methods have been explored for learning the manifold space. We follow the recent state-of-the-art metric learning method in [22], motivated by the divide-and-conquer principle, which breaks a complex problem into several easier subproblems. In particular, this method splits the manifold space and the data into multiple groups and learns each subspace with an independent learner. We adopt this method for medical imaging and integrate the attention module to better learn the manifold space, thereby enhancing the derived attention maps.
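The splitting step can be illustrated with a toy sketch in which a D-dimensional embedding is sliced into K contiguous sub-embeddings, one per learner; the slicing scheme and names below are illustrative assumptions, and the clustering of the data into groups is omitted:

```python
def split_embedding(embedding, k):
    """Slice a D-dimensional embedding into k contiguous subspaces.

    Each slice would be trained by an independent learner on its own
    group of the data; here we only show the dimension split.
    """
    d = len(embedding)
    assert d % k == 0, "embedding dimension must be divisible by k"
    step = d // k
    return [embedding[i * step:(i + 1) * step] for i in range(k)]

# A 128-d embedding split into 4 subspaces of 32 dimensions each:
subs = split_embedding(list(range(128)), k=4)
```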

2.2 Weakly Supervised Segmentation

The attention maps obtained from the manifold network using image-level labels can serve as pixel-level proxy labels. To further refine these attention maps, a segmentation network is trained on them. Specifically, we use the input image $x$ and its corresponding generated attention map $A$ as a training pair. To differentiate foreground pixels from background pixels, we threshold the attention maps with $T$ (i.e., pixels in $A$ greater than $T$ are set to 1, and to 0 otherwise) before training the segmentation network. We choose the popular segmentation network U-net [21] for our experiments. The network is trained with cross-entropy as a loss function, computed over a pixel-wise soft-max activation on the final feature maps, defined as:

$$\mathcal{L}_{CE} = -\sum_{p} \sum_{c=1}^{C'} \hat{y}_{p,c}\, \log\, h_\omega(x)_{p,c}$$

where $h_\omega$ is the segmentation network parameterized by $\omega$, $\hat{y}_{p,c}$ the thresholded proxy label of pixel $p$ for class $c$, and $C'$ the number of categories.
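The proxy-label construction amounts to a simple binarisation of the attention map, as sketched below; the default threshold of 0.5 is an illustrative choice, not the paper's setting of T:

```python
def binarize_attention(attn, threshold=0.5):
    """Turn a [0, 1] attention map into a binary proxy mask.

    attn: 2-D list of attention values in [0, 1]
    Pixels strictly greater than `threshold` become foreground (1).
    """
    return [[1 if a > threshold else 0 for a in row] for row in attn]

mask = binarize_attention([[0.9, 0.2], [0.6, 0.4]], threshold=0.5)
# mask == [[1, 0], [1, 0]]
```

Each (image, mask) pair then plays the role of a fully supervised training example for the segmentation network.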

3 Experiments

The performance of the proposed attention-based approach for weakly supervised segmentation is compared with GradCAM [23], as it has previously been applied to medical image segmentation. We generate the GradCAMs for two standard classification networks based on ResNet50 and ResNet101. Since we employ the divide and conquer approach (DCML) [22] for the underlying manifold learning pipeline, we also compare with a standard metric learning (ML) method trained using the margin loss [25]. Finally, we include the results of full supervision using U-net [21], which serves as an upper bound. For a meaningful evaluation, the model architecture and other parameters are fixed across the different methods, as described in Sec. 3.2. In the following sections, we detail the dataset composition used for training and evaluation, as well as the implementation details of our pipeline. We then present the quantitative and qualitative results of the proposed approach compared with the baseline methods for weakly supervised segmentation.

3.1 Datasets

The proposed method is evaluated on the skin lesion dataset from the ISIC 2018 Challenge [24, 3]. The dataset consists of two independent sets. The first set contains 10,015 images with seven different categories for classification. The second set focuses on the segmentation task and is composed of 2,594 images and their corresponding pixel-level masks. We use the classification data to generate attention maps by learning the manifold space. To this end, this set is split into 8,015 images for training and 2,000 images for testing. For the segmentation task, we leverage the attention maps generated from the classification set (i.e., 10,015 images), which are employed as proxy labels to train the segmentation network. The segmentation dataset is randomly split into three sets: training (1,042), validation (520), and testing (1,038). We employ the validation and testing splits to evaluate all the methods, whereas the training set is used only to train the upper-bound model, i.e., the fully supervised one.

3.2 Implementation details

We follow the work in [22] as the backbone architecture for learning the manifold space, which is based on ResNet-50 [7]. From this network, we use only three residual blocks to avoid a low resolution in the generated attention maps. The attention module consists of three convolution layers with filter sizes of {128, 32, 1}, with a ReLU activation between the convolutional layers and a sigmoid activation in the final layer to produce the attention map. The manifold dimension is fixed to $D$ = 128, and a fixed input image size is used for all our experiments. All models are trained using the Adam optimizer [11] with a batch size of 32. The margin loss parameters are set to $\alpha$ = 0.2 and $\beta$ = 1.2, as in [25]. In each mini-batch, 8 images per class are sampled to ensure a class-balanced scenario, and models are trained for 300 epochs, with the full embedding fine-tuned during the last 50 epochs. For the segmentation network, we use the U-net [21] architecture with an initial filter size of 32. It is also trained with the Adam optimizer, using a batch size of 16 for binary segmentation (i.e., two classes). The same threshold $T$ is used for all the experiments.

3.3 Evaluation of Segmentation using Dice Score

| Method | Validation: init maps | Validation: U-net | Test: init maps | Test: U-net |
|---|---|---|---|---|
| GradCAM (ResNet50) | 34.80 | 41.12 | 34.00 | 40.65 |
| GradCAM (ResNet101) | 34.16 | 39.03 | 33.68 | 39.53 |
| ML + Attention | 56.60 | 58.10 | 56.96 | 59.16 |
| DCML + Attention (ours) | 60.79 | 63.83 | 62.06 | 66.12 |
| Full-supervision (upper bound) | - | 85.90 | - | 86.15 |

Table 1: Quantitative comparison using Dice score (in %) on the validation and test sets. Our method (DCML + Attention) yields the best results in the weakly supervised setting. The GradCAM rows are obtained from classification networks based on ResNet50 and ResNet101, respectively.

We employ the Dice score to evaluate the segmentation performance of the proposed method and the baseline approaches. Table 1 reports these results for the validation and testing datasets. In this table, init maps denotes the raw visual salient regions from either GradCAM or the proposed method, while U-net refers to the performance of the segmentation network trained on these init maps. First, we observe that segmentation results obtained with the initial GradCAM are considerably low: on both validation and testing sets, both variants of GradCAM (ResNet50 and ResNet101) achieve a Dice score of around 34%. If these raw maps are used as proxy labels to train a segmentation network, results improve by about 6%. Even then, the performance remains insufficient, with a maximum Dice score of 41.12%. The attention maps produced by standard metric learning yield better segmentations than the GradCAM variants, achieving Dice scores of 56.60% and 56.96% on the validation and test sets, respectively. The performance of this model improves by a further 2% when its maps are used to train a segmentation network. Last, our method based on DCML achieves the best Dice scores of 60.79% and 62.06% for raw attention maps, which further improve by 3% and 4% on the validation and test sets when the attention maps are used as proxy labels. Compared to GradCAM, our manifold-driven method shows superior performance, with an improvement of over 22% in Dice score, owing to the similarity-based metric learning. In addition, compared to standard metric learning, our method (DCML) brings a performance gain of 4-7% thanks to the subspace learning. This suggests that the proposed model generates more reliable segmentations that can be further employed to train fully supervised networks.
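For reference, the Dice score used throughout this section can be computed on binary masks as follows (a standard definition with a small smoothing term; not tied to any particular implementation):

```python
def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two flat binary masks: 2|A∩B| / (|A| + |B|).

    pred, target: flat lists of 0/1 pixel labels of equal length
    eps: smoothing term to avoid division by zero on empty masks
    """
    inter = sum(p * t for p, t in zip(pred, target))  # |A ∩ B|
    total = sum(pred) + sum(target)                   # |A| + |B|
    return (2.0 * inter + eps) / (total + eps)

# Two masks overlapping on 1 of 2 foreground pixels each:
score = dice_score([1, 1, 0, 0], [1, 0, 0, 1])   # ≈ 0.5
```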

Figure 2: Saliency maps obtained by the different methods (rows 1 and 3) and segmentation results obtained in the weakly supervised setting (rows 2 and 4). The GradCAM columns are obtained from classification networks using ResNet50 and ResNet101, respectively.

3.4 Qualitative Performance Evaluation

Visual results of the different methods are shown in Fig. 2. The saliency maps (rows 1 and 3) produced by GradCAM spread over the entire image, highlighting discriminative regions of the lesion but failing to capture its whole context. In contrast, attention maps derived from metric learning better capture the attentive region, mostly covering the lesion. This shows the potential of attention maps generated by manifold learning over GradCAM on classification networks. Additionally, compared to standard metric learning, our method captures finer details, which may be due to the multiple-subspace learning easing the task. The segmentation results obtained by training a segmentation network on the initial salient regions are depicted in rows 2 and 4. These images demonstrate the ability of our method to generate pixel-level proxy labels from weak supervision that can be used to train segmentation networks.

4 Conclusion

We presented a novel manifold-driven attention-based pipeline for weakly supervised segmentation using image-level labels. Our method directly produces attention maps, which serve as proxy labels for segmentation. On skin lesion images, the segmentation results outperform the state-of-the-art GradCAM methods by a margin of 22% in Dice score. Qualitative results demonstrate that both the attention maps and the segmentations produced by our method focus on the target lesion, showing the effectiveness and robustness of our approach. Our proposed pipeline can easily fit into more complex weakly supervised settings, which can be explored in future work.


This research work was partly funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), Fonds de Recherche du Quebec (FQRNT), and NVIDIA Corporation with a donation of a GPU.


  • [1] Bearman, A., Russakovsky, O., Ferrari, V., Fei-Fei, L.: What’s the point: Semantic segmentation with point supervision. In: European Conference on Computer Vision. pp. 549–565. Springer (2016)

  • [2] Chen, L., Chen, J., Hajimirsadeghi, H., Mori, G.: Adapting Grad-CAM for embedding networks. In: Winter Conference on Applications of Computer Vision (2020)
  • [3] Codella, N., Rotemberg, V., Tschandl, P., Celebi, M.E., Dusza, S., Gutman, D., Helba, B., Kalloo, A., Liopyris, K., Marchetti, M., et al.: Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC). arXiv preprint arXiv:1902.03368 (2019)
  • [4] Dolz, J., Gopinath, K., Yuan, J., Lombaert, H., Desrosiers, C., Ayed, I.B.: HyperDense-Net: a hyper-densely connected CNN for multi-modal image segmentation. Transactions on Medical Imaging 38(5), 1116–1126 (2018)
  • [5] Feng, X., Yang, J., Laine, A.F., Angelini, E.D.: Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules. In: Medical Image Computing and Computer-Assisted Intervention. pp. 568–576. Springer (2017)
  • [6] Gondal, W.M., Köhler, J.M., Grzeszick, R., Fink, G.A., Hirsch, M.: Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images. In: International Conference on Image Processing. pp. 2069–2073. IEEE (2017)
  • [7] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Computer Vision and Pattern Recognition. pp. 770–778. IEEE (2016)

  • [8] Jia, Z., Huang, X., Eric, I., Chang, C., Xu, Y.: Constrained deep weak supervision for histopathology image segmentation. Transactions on Medical Imaging 36(11), 2376–2388 (2017)
  • [9] Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D., Glocker, B.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis 36, 61–78 (2017)
  • [10] Kervadec, H., Dolz, J., Tang, M., Granger, E., Boykov, Y., Ayed, I.B.: Constrained-CNN losses for weakly supervised segmentation. Medical Image Analysis 54, 88–99 (2019)
  • [11] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations. vol. 5 (2015)
  • [12] Krähenbühl, P., Koltun, V.: Efficient inference in fully connected CRFs with gaussian edge potentials. In: Advances in Neural Information Processing Systems. pp. 109–117 (2011)
  • [13] Kwak, S., Hong, S., Han, B., et al.: Weakly supervised semantic segmentation using superpixel pooling network. In: Association for the Advancement of Artificial Intelligence. pp. 4111–4117 (2017)

  • [14] Lin, D., Dai, J., Jia, J., He, K., Sun, J.: Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In: Computer Vision and Pattern Recognition. pp. 3159–3167. IEEE (2016)
  • [15] Lin, M., Chen, Q., Yan, S.: Network in network. arXiv preprint arXiv:1312.4400 (2013)
  • [16] Nguyen, H.G., Pica, A., Hrbacek, J., Weber, D.C., La Rosa, F., Schalenbourg, A., Sznitman, R., Cuadra, M.B.: A novel segmentation framework for uveal melanoma in magnetic resonance imaging based on class activation maps. In: Medical Imaging with Deep Learning. pp. 370–379 (2019)
  • [17] Oquab, M., Bottou, L., Laptev, I., Sivic, J.: Is object localization for free?-weakly-supervised learning with convolutional neural networks. In: Computer Vision and Pattern Recognition. pp. 685–694. IEEE (2015)
  • [18] Papandreou, G., Chen, L.C., Murphy, K., Yuille, A.L.: Weakly- and semi-supervised learning of a DCNN for semantic image segmentation. In: International Conference on Computer Vision. IEEE (2015)

  • [19] Pinheiro, P.O., Collobert, R.: From image-level to pixel-level labeling with convolutional networks. In: Computer Vision and Pattern Recognition. pp. 1713–1721. IEEE (2015)
  • [20] Rajchl, M., Lee, M.C., Oktay, O., Kamnitsas, K., Passerat-Palmbach, J., Bai, W., Damodaram, M., Rutherford, M.A., Hajnal, J.V., Kainz, B., et al.: Deepcut: Object segmentation from bounding box annotations using convolutional neural networks. Transactions on Medical Imaging 36(2), 674–683 (2016)
  • [21] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention. pp. 234–241. Springer (2015)
  • [22] Sanakoyeu, A., Tschernezki, V., Buchler, U., Ommer, B.: Divide and conquer the embedding space for metric learning. In: Computer Vision and Pattern Recognition. pp. 471–480. IEEE (2019)
  • [23] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: Visual explanations from deep networks via gradient-based localization. In: International Conference on Computer Vision. pp. 618–626. IEEE (2017)
  • [24] Tschandl, P., Rosendahl, C., Kittler, H.: The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific data 5, 180161 (2018)
  • [25] Wu, C.Y., Manmatha, R., Smola, A.J., Krahenbuhl, P.: Sampling matters in deep embedding learning. In: International Conference on Computer Vision. pp. 2840–2848. IEEE (2017)
  • [26] Wu, K., Du, B., Luo, M., Wen, H., Shen, Y., Feng, J.: Weakly supervised brain lesion segmentation via attentional representation learning. In: Medical Image Computing and Computer-Assisted Intervention. pp. 211–219. Springer (2019)
  • [27] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Computer Vision and Pattern Recognition. pp. 2921–2929. IEEE (2016)