
Adversarial-Prediction Guided Multi-task Adaptation for Semantic Segmentation of Electron Microscopy Images

by   Jiajin Yi, et al.
NetEase, Inc

Semantic segmentation is an essential step for electron microscopy (EM) image analysis. Although supervised models have achieved significant progress, the need for labor-intensive pixel-wise annotation is a major limitation. To complicate matters further, supervised learning models may not generalize well to a novel dataset due to domain shift. In this study, we introduce an adversarial-prediction guided multi-task network that learns to adapt a well-trained model to a novel unlabeled target domain. Since no label is available on the target domain, we learn an encoding representation not only for supervised segmentation on the source domain but also for unsupervised reconstruction of the target data. To improve its discriminative ability with geometrical cues, we further guide the representation learning by multi-level adversarial learning in the semantic prediction space. Comparisons and an ablation study on a public benchmark demonstrate the state-of-the-art performance and effectiveness of our approach.



1 Introduction

The accurate segmentation of electron microscopy (EM) images is an essential step toward understanding the brain's neuronal structures [2, 3]. Current state-of-the-art methods for medical image segmentation are fully convolutional neural networks with an encoder-decoder architecture trained under supervision, and a common obstacle across them is the severe dependence on large amounts of pixel-wise labeled data. However, annotating EM images is labor-intensive and time-consuming, which makes it difficult to obtain a large number of labeled EM images. Instead of re-annotating in each domain, an appealing alternative is to generalize a model trained on a dataset with sufficient supervision (the source domain) to a novel dataset without labels (the target domain), thereby mitigating the training difficulties of the unlabeled dataset.

To reduce domain discrepancy, many studies in recent years have focused on unsupervised domain adaptation (UDA), with the aim of learning a domain-invariant representation or model. One frequently used strategy is to learn domain-invariant features through distribution alignment. For example, the maximum mean discrepancy (MMD) metric and a correlation loss were used in [4] and [5], respectively, to match the feature distributions of different domains. In [6, 7], an adversarial domain discriminator was introduced to align the features output by the feature encoder, which is more effective than directly using statistical metrics [4]. In [8], Ghifary et al. proposed to learn a shared feature encoder by augmenting the supervised segmentation task on the source domain with an auxiliary task for the reconstruction of the unlabeled target data; a similar idea was applied by Roels et al. [9] for EM image segmentation. However, since the decoding representation learning is guided only by labels on the source domain, the decoding representation may not generalize well to the target domain. Tsai et al. [10] proposed label-space adaptation based on adversarial learning for multi-class semantic segmentation, building on the observation that different domains share similar label spaces, which carry rich spatial layouts and local context information in the multi-class setting. For a complicated binary segmentation task, however, this label-level adaptation strategy leaves the learned domain-invariant feature representation less constrained by the rich visual cues of the target-domain images. Recently, a semi-supervised domain adaptation method was proposed in [2] by applying MMD at the final stage of the decoder, but this semi-supervised method cannot be directly applied in our unsupervised scenario. Despite these efforts, the performance of UDA remains limited for pixel-wise segmentation tasks with complex image appearance and ambiguous boundaries.

To address the above issues, we propose an Adversarial-Prediction guided Multi-task Adaptation Net (APMA-Net) with an encoder-decoder architecture. Given an encoder-decoder learned with supervision on the source domain, our aim is to adapt it to a target domain whose images have a different visual and structural style. To this end, we learn shared encoding representations by seeking better reconstruction of the images of both domains. To improve the discriminative ability of both the encoding and decoding representations on the unlabeled target domain, we further guide the representation learning with the similarity in geometric cues between the two domains, captured by multi-level adversarial discriminators in the semantic prediction and feature spaces. In this way, the adaptation is jointly guided by both geometrical and visual information, and the distribution alignment happens at multiple representation layers. Compared with methods [8, 9] that adapt only encoding features with an auto-encoder, the proposed method learns domain-invariant features that are more specific to the discriminative task on the target domain.

2 Method

Our objective is to learn a sound label predictor for the unlabeled target domain with the help of a labeled source domain. The overview of our proposed adversarial-prediction guided multi-task adaptation net (APMA-Net) is shown in Fig. 1.

To mitigate the domain gap, we perform joint adaptation at two stages, namely the encoding stage and the decoding stage, in a multi-task learning paradigm. Concretely, we learn features guided by two tasks: 1) supervised segmentation on the source domain using U-Net [1] as the backbone encoder-decoder; 2) unsupervised reconstruction of images from both domains using an auto-encoder, following the idea of [8, 9]. The two tasks share the same encoder. Although the reconstruction branch is unsupervised and not targeted at learning discriminative features, it captures crucial visual and structural cues in the image space of the target domain for the adaptation. To make the learned features discriminative on the target domain, and thus learn a good cross-domain label predictor, multi-level domain-adversarial adaptation on the decoding feature space and the prediction space is conducted at the decoding stage. Note that, without the guidance of image reconstruction, the learned features and the cross-domain predictor may contain less information from the target image space, which is the most reliable information available about the target domain.

Let the source domain be a set of labeled images and the target domain a novel set of unlabeled images. We denote a labeled sample of the source data as x_s and its corresponding label as y_s. Similarly, a sample of the unlabeled target data is denoted as x_t. Moreover, we denote the feature maps preceding the output layer of our APMA-Net for source and target data as F_s and F_t, respectively. The final predictions for the source and target images x_s and x_t are denoted by P_s and P_t, respectively.

Encoding stage adaptation. To learn a feature representation that encodes enough visual and structural information from both the source and target domains, we augment the main segmentation generator network (denoted as G_seg) with an auto-encoder (denoted as G_rec). Our aim is to make the shared encoder of G_seg and G_rec learn not only the information of the source domain but also the visual information of the target domain. Specifically, the shared encoder takes both the source data x_s and the target data x_t as inputs. When feeding x_s and x_t to the G_seg branch, its decoder produces the predictions P_s and P_t, respectively, while the G_rec branch produces the reconstructed images x̂_s and x̂_t, respectively. It is expected that x̂_s and x̂_t are close to the corresponding original images x_s and x_t, while the prediction P_s is close to the corresponding label y_s. We use the cross-entropy loss for the source prediction and the mean squared loss for the reconstruction, which are defined as follows,


L_seg = − Σ_c y_s^(c) log P_s^(c),    (1)

L_rec = ‖x̂_s − x_s‖² + ‖x̂_t − x_t‖²,    (2)

where spatial dimensions are omitted for simplicity in Eq. (1).
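As a quick numeric sketch of Eqs. (1)–(2), the two losses can be computed over flattened binary masks as follows. This is a plain-Python stand-in for illustration, not the paper's implementation:

```python
import math

def seg_loss(P, y, eps=1e-8):
    # Pixel-averaged binary cross-entropy between source predictions P and
    # labels y (Eq. 1); spatial dimensions are flattened into one list.
    terms = [yi * math.log(pi + eps) + (1 - yi) * math.log(1 - pi + eps)
             for pi, yi in zip(P, y)]
    return -sum(terms) / len(terms)

def rec_loss(x_hat, x):
    # Mean squared reconstruction error (Eq. 2) for one flattened image.
    return sum((a - b) ** 2 for a, b in zip(x_hat, x)) / len(x)

print(seg_loss([0.9, 0.1, 0.8], [1.0, 0.0, 1.0]))  # small: confident and correct
print(rec_loss([0.0, 0.0], [1.0, 1.0]))            # 1.0
```

Confident, correct predictions drive the segmentation term toward zero, while the reconstruction term penalizes any deviation of x̂ from x on either domain.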

Note that the auxiliary reconstruction strategy of Eq. (2) has been exploited in many tasks, including domain adaptation [8, 9]. In our model, however, we learn to adapt the domains with simultaneous guidance from the image space and the label space. In this way, we can better bias the cross-domain discriminative predictor toward the target domain.
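The shared-encoder, two-branch layout described above can be sketched with toy fully connected layers standing in for the U-Net encoder and decoders. All dimensions and weight names (W_enc, W_seg, W_rec) below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_CODE = 16, 8  # toy "image" and code sizes, chosen only for illustration

# Shared encoder plus one decoder per task (segmentation G_seg, reconstruction G_rec).
W_enc = rng.normal(scale=0.1, size=(D_IN, D_CODE))
W_seg = rng.normal(scale=0.1, size=(D_CODE, D_IN))
W_rec = rng.normal(scale=0.1, size=(D_CODE, D_IN))

def encode(x):
    # Shared encoding representation used by both branches.
    return np.tanh(x @ W_enc)

def segment(x):
    # G_seg branch: per-pixel sigmoid prediction P.
    z = encode(x)
    return 1.0 / (1.0 + np.exp(-(z @ W_seg)))

def reconstruct(x):
    # G_rec branch: reconstructed image x_hat.
    return encode(x) @ W_rec

x_s = rng.normal(size=(4, D_IN))  # stand-in source batch
x_t = rng.normal(size=(4, D_IN))  # stand-in target batch

P_s, P_t = segment(x_s), segment(x_t)
xhat_s, xhat_t = reconstruct(x_s), reconstruct(x_t)
```

Because both branches go through `encode`, gradients from the source-domain segmentation loss and from the reconstruction loss on both domains all update the same encoder weights, which is the mechanism behind the encoding-stage adaptation.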

Decoding stage adaptation. The segmentation generator G_seg is also shared by the two domains (Fig. 1). Taking the encoded representation of a source image or a target image as input, we obtain the final feature maps F_s, F_t and the label predictions P_s, P_t, respectively. So far, we have used image information from the target domain but label information only from the source domain to learn compact features; the model may therefore still lack discriminative ability on the target domain. Moreover, the decoding representations in G_seg remain insufficiently adapted to the target domain.

To improve the discriminative ability of both the encoding and decoding representations on the target domain, we introduce multi-level adversarial learning on both the prediction and feature spaces (Fig. 1): 1) in the space of structured predictions, we leverage a fully convolutional domain discriminator, following a similar idea to [10]; 2) in the space of decoding representations, we apply a domain discriminator on the feature maps of the last decoding layer.

More specifically, we utilize two domain discriminators, D_p and D_f, for alignment at multi-level prediction spaces. The discriminator D_p is used for the adaptation of the final predictions (see Fig. 1), and D_f is used for the adaptation of the final decoding representations preceding the output layer. The two discriminators are trained to distinguish the domain labels of inputs from the different domains. With this domain-level supervision, D_p and D_f are learned by minimizing the following loss,


min_{D_p, D_f} L_D = L_D(P_s, P_t; D_p) + L_D(F_s, F_t; D_f),    (3)

L_D(A_s, A_t; D) = − Σ [ log D(A_s) + log(1 − D(A_t)) ],    (4)

in which the image width and height dimensions are omitted for simplicity in Eq. (4). This multi-level adversarial learning makes the encoding and decoding representations more discriminative on the unlabeled target domain.
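A minimal sketch of the per-discriminator term of Eq. (4), treating the discriminator outputs as already-computed probabilities (an assumption made here purely for illustration):

```python
import math

def disc_loss(d_source, d_target, eps=1e-8):
    # Domain-classification BCE of Eq. (4): the discriminator should score
    # source-domain inputs near 1 and target-domain inputs near 0.
    ls = -sum(math.log(d + eps) for d in d_source) / len(d_source)
    lt = -sum(math.log(1 - d + eps) for d in d_target) / len(d_target)
    return ls + lt

confident = disc_loss([0.95, 0.9], [0.05, 0.1])  # correct on both domains
fooled = disc_loss([0.5, 0.5], [0.5, 0.5])       # cannot tell domains apart
print(confident, fooled)
```

A discriminator that separates the domains well attains a low loss; one that the generator has fooled into outputting 0.5 everywhere does not.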

The segmentation-reconstruction network is learned through minimizing the following loss function,


min_{G_seg, G_rec} L = L_seg + λ_rec L_rec + λ_p L_adv^p + λ_f L_adv^f,    (5)

in which λ_rec, λ_p, and λ_f are trade-off weights and

L_adv^p = − Σ log D_p(P_t),    L_adv^f = − Σ log D_f(F_t),    (6)

where we use inverted domain labels to obtain a loss function with a lower bound. To train the proposed model, we iteratively optimize the problems in Eq. (3) and Eq. (5). Practically, we first use the annotated source images to train an initial G_seg.
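The generator-side objective of Eqs. (5)–(6) can be sketched in the same style; the default lambda values below are placeholders, not the paper's settings:

```python
import math

def adv_loss(d_target, eps=1e-8):
    # Eq. (6) with inverted domain labels: small when the discriminator
    # scores target-domain outputs as source-like (i.e., it is fooled).
    return -sum(math.log(d + eps) for d in d_target) / len(d_target)

def total_loss(l_seg, l_rec, l_adv_p, l_adv_f,
               lam_rec=1.0, lam_p=1.0, lam_f=1.0):
    # Weighted generator objective of Eq. (5); the lambdas are placeholder
    # trade-off weights, not the paper's values.
    return l_seg + lam_rec * l_rec + lam_p * l_adv_p + lam_f * l_adv_f

# Fooling the discriminator on target outputs lowers the generator loss:
print(adv_loss([0.9, 0.9]), adv_loss([0.1, 0.1]))
```

In the alternating scheme, one step minimizes Eq. (3) over D_p and D_f with the true domain labels, and the next minimizes this total loss over the generators with the labels inverted.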

3 Results

Given an annotated domain, our evaluation task is the unsupervised segmentation of mitochondria in EM images from a novel domain with severe domain shift. We use the well-annotated EPFL dataset as the source domain, which was scanned by Focused Ion Beam Scanning EM (FIB-SEM). It is an image stack of size 165 × 1024 × 768 taken from the CA1 hippocampus region of a mouse brain. For the target domain, we use an image stack of size 20 × 1024 × 1024 [11], which was acquired by serial section Transmission EM (ssTEM) and taken from the ventral nerve cord of a Drosophila melanogaster third instar larva. Since the target dataset contains only a small number of serial sections, we split it along the z-axis, with 67% for training and 33% for testing.

Figure 2: Visual comparison of different segmentation results.

We implement our network using PyTorch on a 1080Ti GPU. We train with the Adam optimizer [12] and polynomial learning-rate decay with a power of 0.9; the trade-off weights λ_rec, λ_p, and λ_f are kept fixed throughout training.
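The polynomial decay schedule mentioned above can be sketched as follows (the base learning rate below is a placeholder value, not the paper's setting):

```python
def poly_lr(base_lr, step, max_steps, power=0.9):
    # Polynomial decay: the rate falls from base_lr at step 0 to 0 at max_steps.
    return base_lr * (1.0 - step / max_steps) ** power

schedule = [poly_lr(1e-3, s, 100) for s in (0, 25, 50, 75, 100)]
print(schedule)  # monotonically decreasing, ending at 0
```

With power 0.9 the decay is slightly slower than linear early in training and steeper near the end.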

We compare our method with 1) No adaptation, which uses a U-Net trained only on the source domain to segment the target domain; 2) Roels et al. [9], which aligns the encoder features of the source and target domains by adding a reconstruction decoder to the U-Net; and 3) Ganin et al. [6], which learns domain-invariant features using an adversarial domain discriminator.

Figure 2 shows a visual comparison, in which regions falsely detected or missed by No adaptation, Roels et al. [9], and/or Ganin et al. [6] are highlighted with orange arrows. Our method shows clearly reduced detection error. However, there are also regions (highlighted with a yellow arrow) that all methods fail to detect. The quantitative results are shown in Table 1, evaluated using the Jaccard index (JAC) and Dice similarity coefficient (DSC). Our proposed method yields an accuracy of 53.6% in JAC and 69.8% in DSC, significantly better than the other unsupervised domain adaptation methods.

Experiments        DSC(%)  JAC(%)
No adaptation       45.3    29.3
Roels et al. [9]    66.4    49.7
Ganin et al. [6]    66.7    50.0
Our APMA-Net        69.8    53.6
Table 1: Comparison with state-of-the-art methods for unsupervised mitochondria segmentation in EM images.
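For reference, the two metrics reported in Table 1 can be computed from flat binary masks as follows (a plain-Python sketch, with values returned in percent):

```python
def dice_jaccard(pred, gt):
    # pred, gt: flat binary masks (0/1) of equal length.
    # Returns (DSC, JAC) in percent, matching the metrics of Table 1.
    inter = sum(p and g for p, g in zip(pred, gt))   # |pred ∩ gt|
    union = sum(p or g for p, g in zip(pred, gt))    # |pred ∪ gt|
    dsc = 200.0 * inter / (sum(pred) + sum(gt))      # 2|∩| / (|pred| + |gt|)
    jac = 100.0 * inter / union                      # |∩| / |∪|
    return dsc, jac

dsc, jac = dice_jaccard([1, 1, 0, 0], [1, 0, 1, 0])  # overlap 1, sizes 2 and 2
print(round(dsc, 1), round(jac, 1))  # 50.0 33.3
```

DSC is always at least as large as JAC for the same masks, which is consistent with the pattern in Tables 1 and 2.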

To validate the effectiveness of our joint encoding and decoding adaptation, five ablated versions of our model are compared: 1) only the encoding-stage adaptation with the auto-encoder (EN-AE); 2) only the final decoding-feature adaptation (DE-F); 3) only the decoding-prediction adaptation (DE-P); 4) the combination of 1) and 2) (EN-AE + DE-F); and 5) our final model (APMA-Net). The evaluation results are shown in Table 2. Compared with No adaptation, EN-AE outperforms it by a large margin, which indicates that the learned label predictor is more biased toward the target domain. The performances of DE-F and DE-P are both superior to those of No adaptation and EN-AE, which indicates the effectiveness of the prediction-space adaptation. Furthermore, the combination of EN-AE and DE-F clearly performs better than either alone, and is further improved by adding DE-P (i.e., APMA-Net). All the above results show that adaptation jointly guided by label and image information is an effective way to mitigate the domain gap.

4 Conclusion

In this paper, we proposed a multi-task adaptation method to address the problem of unsupervised segmentation of EM images. To improve the discriminative ability of the cross-domain label predictor on the unlabeled domain, we adopted multi-level adversarial learning in the semantic prediction space to leverage domain-level label information. Experimental results show that our proposed method achieves state-of-the-art performance in accuracy and visual quality.

Experiments                     DSC(%)  JAC(%)
EN-AE                            66.4    49.7
DE-F                             67.6    51.0
DE-P                             67.4    50.8
EN-AE + DE-F                     68.3    51.8
EN-AE + DE-F + DE-P (APMA-Net)   69.8    53.6
Table 2: Ablation study of our APMA-Net. EN-AE: encoding-stage adaptation by the auto-encoder; DE-F: decoding-stage adaptation by aligning features; DE-P: decoding-stage adaptation by aligning predictions.


  • [1] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Proceedings of Medical Image Computing and Computer-Assisted Intervention, 2015, pp. 234–241.
  • [2] R. Bermúdez-Chacón, P. Márquez-Neila, M. Salzmann, and P. Fua, “A domain-adaptive two-stream u-net for electron microscopy image segmentation,” in International Symposium on Biomedical Imaging, 2018, pp. 400–404.
  • [3] J. Peng and Z. Yuan, “Mitochondria segmentation from em images via hierarchical structured contextual forest,” IEEE Journal of Biomedical and Health Informatics, 2020.
  • [4] M. Long, Y. Cao, and J. Wang, “Learning transferable features with deep adaptation networks,” in International Conference on Machine Learning, 2015, pp. 97–105.
  • [5] B. Sun and K. Saenko, “Deep coral: Correlation alignment for deep domain adaptation,” in European Conference on Computer Vision, 2016, pp. 443–450.
  • [6] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096–2030, 2016.
  • [7] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial discriminative domain adaptation,” in International Conference on Computer Vision and Pattern Recognition, 2017, pp. 7167–7176.
  • [8] M. Ghifary, W.B. Kleijn, M. Zhang, D. Balduzzi, and W. Li, “Deep reconstruction-classification networks for unsupervised domain adaptation,” in European Conference on Computer Vision, 2016, pp. 597–613.
  • [9] J. Roels, J. Hennies, Y. Saeys, W. Philips, and A. Kreshuk, “Domain adaptive segmentation in volume electron microscopy imaging,” in International Symposium on Biomedical Imaging, 2019, pp. 1519–1522.
  • [10] Y. Tsai, W. Hung, S. Schulter, K. Sohn, M. Yang, and M. Chandraker, “Learning to adapt structured output space for semantic segmentation,” in International Conference on Computer Vision and Pattern Recognition, 2018, pp. 7472–7481.
  • [11] S. Gerhard, J. Funke, J. Martel, A. Cardona, and R. Fetter, “Segmented anisotropic sstem dataset of neural tissue,” Figshare, 2013.
  • [12] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.