Histopathology is key to many clinical decisions taken in oncology, based on the visual quantification of biomarkers on stained slides of suspected tumor tissue. In a clinical setting, the PD-L1 Tumor Cell (TC) score for Non-Small Cell Lung Cancer (NSCLC) is, for instance, predictive of response for patients treated with an anti-PD1/PD-L1 checkpoint inhibitor therapy. Several exploratory studies have moreover shown that both tumor immune contexture and epithelial immune cell infiltration are predictive of patient prognosis. All these examples rely on an accurate segmentation of the epithelial compartment. The non-specificity of the PD-L1 staining, which includes epithelial regions but also immune cells and necrotic regions (cf. Fig. 1b), makes this task challenging. This difficulty, together with the demonstrated performance of deep learning methods in digital pathology image analysis [7, 3, 6], leads us towards this set of methods, and more particularly towards deep semantic segmentation networks. The prerequisite dataset of boundary-precise manual annotations is, however, time-consuming and costly to generate. Here, because CK specifically labels epithelium, we semi-automatically build this dataset on CK images using coarse manual input combined with simple heuristic segmentation rules. By transferring the segmentation masks and the CK images into the PD-L1 stain domain using unpaired image-to-image translation, we generate a synthetic CK-based PD-L1 dataset which is merged with manual annotations on true PD-L1 images and used to train the PD-L1 epithelial semantic segmentation network.
We exploit recent advances in generative adversarial networks (GANs), in particular unpaired image-to-image translation using CycleGAN, as used for the normalization of HE images. This makes the transformation of CK images into synthetic PD-L1 images possible without the need for serial sections or re-staining. Here, we present what is, to our knowledge, the first application of domain-adaptation-based semantic segmentation in the field of digital pathology. Other studies described similar ideas for medical image analysis, e.g. converting CT images into synthetic MRI images and training a segmentation network on both real and synthetic MRI images [2, 5], but follow a two-step methodology. Instead, we introduce an end-to-end trainable network (cf. Fig. 1c) named DASGAN (Domain Adaptation and Segmentation GAN) that jointly performs unpaired image-to-image translation and semantic segmentation. We show the superiority of the introduced network against (i) networks trained only on manual annotations of real PD-L1 images and (ii) networks trained separately for domain adaptation and for semantic segmentation.
The good specificity of CK staining makes the segmentation of epithelial regions in CK images possible using color deconvolution followed by Otsu thresholding and closing morphological operations (cf. Fig. 1a). While the newly introduced network for joint unpaired domain adaptation and semantic segmentation theoretically makes it possible to combine manual or automated annotations from any two stain domains and independent cohorts, we apply it here to transfer images from the CK domain to the PD-L1 domain and to leverage the epithelial segmentation masks in the CK domain as annotations in the PD-L1 domain.
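As a rough illustration, the thresholding and cleaning steps of this heuristic can be sketched as follows. This is a minimal numpy/scipy sketch, not the paper's implementation: it assumes the CK stain channel has already been extracted by color deconvolution, and the function names are ours.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(channel):
    """Return the Otsu threshold of a single-channel 8-bit image."""
    hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    total = channel.size
    mu_total = (np.arange(256) * hist).sum() / total
    best_t, best_var = 0, 0.0
    cum_w, cum_mu = 0.0, 0.0
    for t in range(256):
        cum_w += hist[t]
        if cum_w == 0 or cum_w == total:
            continue
        cum_mu += t * hist[t]
        w0 = cum_w / total
        mu0 = cum_mu / cum_w
        mu1 = (mu_total * total - cum_mu) / (total - cum_w)
        var = w0 * (1 - w0) * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_epithelium(stain_channel, closing_size=3):
    """Threshold a CK stain channel and clean the mask with morphological closing."""
    t = otsu_threshold(stain_channel)
    mask = stain_channel > t
    return ndimage.binary_closing(mask, structure=np.ones((closing_size, closing_size)))
```

In practice the closing structure size would be tuned to the image resolution so that small gaps inside epithelial regions are filled without merging separate structures.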
Two generators $G_{C\to P}$ and $G_{P\to C}$ are trained to synthesize samples in domain $P$ (PD-L1) from real samples in domain $C$ (CK) and vice versa. Two discriminators $D_P$ and $D_C$ are trained in opposition to distinguish synthetic from real samples in the two domains. The parameters of the two discriminator and two generator networks are learned in an adversarial manner following a min-max game on the two adversarial losses $\mathcal{L}_{adv}^{P}$ and $\mathcal{L}_{adv}^{C}$:

$$\mathcal{L}_{adv}^{P} = \mathbb{E}_{x_P}\big[\log D_P(x_P)\big] + \mathbb{E}_{x_C}\big[\log\big(1 - D_P(G_{C\to P}(x_C))\big)\big],$$

with $\mathcal{L}_{adv}^{C}$ defined symmetrically for $D_C$ and $G_{P\to C}$.
The necessity of having image pairs for image translation between $C$ and $P$ is bypassed using a cycle-consistency loss. The cycle loss $\mathcal{L}_{cyc}$ is defined to prevent mode collapse of the two GAN models and to constrain the invertibility of the translated domains, based on translating the synthesized samples $G_{C\to P}(x_C)$ and $G_{P\to C}(x_P)$ back to their original domains $C$ and $P$:

$$\mathcal{L}_{cyc} = \mathbb{E}_{x_C}\big[\lVert G_{P\to C}(G_{C\to P}(x_C)) - x_C\rVert_1\big] + \mathbb{E}_{x_P}\big[\lVert G_{C\to P}(G_{P\to C}(x_P)) - x_P\rVert_1\big].$$
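As a toy illustration of the cycle constraint, the L1 reconstruction error can be computed as below (a numpy sketch; the generator callables stand in for the trained networks and the function name is ours):

```python
import numpy as np

def cycle_loss(x_c, x_p, g_c2p, g_p2c):
    """L1 cycle-consistency: translate each batch to the other domain and back."""
    c_rec = g_p2c(g_c2p(x_c))   # C -> P -> C reconstruction
    p_rec = g_c2p(g_p2c(x_p))   # P -> C -> P reconstruction
    return np.mean(np.abs(c_rec - x_c)) + np.mean(np.abs(p_rec - x_p))
```

A pair of perfectly inverse generators yields a loss of zero, which is the behavior the cycle term rewards during training.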
Following the auxiliary classifier GAN (AC-GAN) model, we extend the CycleGAN model to obtain segmentation maps as auxiliary outputs of the two discriminators $D_P$ and $D_C$ (cf. Fig. 1c). We condition the input images of $G_{C\to P}$ and $G_{P\to C}$ with the respective ground truth segmentation class masks by concatenating the mask along the image channel axis. The respective concatenated volumes go through a series of transformations by $G_{C\to P}$ and $G_{P\to C}$ to produce synthetic images in the respective target domains $P$ and $C$. The two discriminator networks are extended to predict pixel-wise class probability maps in addition to predicting the correct source of the image. To this end, and to propagate the class-specific information through to the generator, a segmentation loss is introduced to the discriminator in addition to the original adversarial loss:

$$\mathcal{L}_{seg} = \ell_{ce}(y, \hat{y}),$$
where $\ell_{ce}$ denotes the categorical cross-entropy loss and where $y$ and $\hat{y}$ correspond to the ground truth and the predicted label maps respectively. This results in the following loss for the proposed joint domain adaptation and semantic segmentation DASGAN model:

$$\mathcal{L}_{DASGAN} = \mathcal{L}_{adv}^{P} + \mathcal{L}_{adv}^{C} + \lambda_{cyc}\,\mathcal{L}_{cyc} + \lambda_{seg}\,\mathcal{L}_{seg},$$

with $\lambda_{cyc}$ and $\lambda_{seg}$ weighting the losses associated with the cycle constraint and the auxiliary segmentation task respectively. The proposed DASGAN model is used, at training time, to leverage annotations on CK-stained images for the segmentation of epithelial regions in PD-L1-stained images. While only the discriminator $D_P$ is employed at prediction time, the use of a symmetric discriminator ensures the balancing of the two counterplaying GAN networks.
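The individual loss terms and their weighted combination can be illustrated with a minimal numpy sketch (function names and default weights are ours, chosen only for illustration; the actual values used in training are those reported in the experiments):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy on discriminator outputs in (0, 1) (adversarial term)."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def pixelwise_ce(y_true, y_prob, eps=1e-7):
    """Categorical cross-entropy averaged over pixels; y_true is one-hot (H, W, K)."""
    y_prob = np.clip(y_prob, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_prob), axis=-1))

def dasgan_loss(adv_p, adv_c, cyc, seg, lam_cyc=10.0, lam_seg=1.0):
    """Weighted sum of the adversarial, cycle, and auxiliary segmentation losses."""
    return adv_p + adv_c + lam_cyc * cyc + lam_seg * seg
```

A perfect pixel-wise prediction drives the segmentation term to zero, leaving only the adversarial and cycle terms to balance the two GAN directions.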
2.3 Extension to Tumor Cell scoring
To differentiate between PD-L1 positive and PD-L1 negative tumor epithelial regions, as required for the calculation of the TC score, we perform three-class pixel-wise mask conditioning. As shown in Fig. 1(a), we transform each CK binary segmentation mask into two examples: a PD-L1 negative and a PD-L1 positive epithelial mask. Given a CK binary segmentation mask, a PD-L1 negative epithelium mask is built by giving the labels 0 and 1 to the non-epithelium and the epithelium regions, respectively, so that the epithelium regions in the CK image are conditioned to become PD-L1 negative. Similarly, a PD-L1 positive epithelium mask is built from the same CK mask by giving the labels 0 and 2 to the non-epithelium and epithelium regions, respectively, so that the epithelium regions in the CK image are conditioned to become PD-L1 positive.
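The mask construction and channel-axis conditioning described above can be sketched in a few lines of numpy (function names are ours):

```python
import numpy as np

def conditioning_masks(ck_mask):
    """From a binary CK epithelium mask, build the PD-L1 negative (label 1)
    and PD-L1 positive (label 2) conditioning masks; background stays 0."""
    neg = np.where(ck_mask > 0, 1, 0)
    pos = np.where(ck_mask > 0, 2, 0)
    return neg, pos

def condition_image(image, mask):
    """Concatenate the conditioning mask to the image along the channel axis."""
    return np.concatenate([image, mask[..., None]], axis=-1)
```

Each CK patch thus yields two conditioned generator inputs, one asking for a PD-L1 negative rendering of its epithelium and one for a PD-L1 positive rendering.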
After describing the training, test and validation datasets as well as the network architectures, we present results of quantitative evaluation against pathologist manual annotations for the two problems of (i) epithelial segmentation and (ii) PD-L1 positive and negative epithelial region detection.
3 Experiments and Results
3.1 Cytokeratin and PD-L1 Datasets
The training set consists of CK-stained whole slide images (WSIs) of NSCLC samples and of WSIs of the same indication stained with the SP263 PD-L1 clone. The CK images and the PD-L1 images are unpaired and come from two independent patient cohorts. To ensure purity of the samples generated from the CK-stained slides, the samples are exclusively created in tumor center regions delineated by pathologists, and regions with non-specific staining are discarded. Because this manual input is typically provided at a macroscopic scale (1x or 2x), the associated effort is minimal compared to the annotation of fine epithelium structure at high resolution (20x). Positive () and negative () epithelium regions were partially delineated at high resolution on PD-L1 images, together with non-epithelium regions (e.g. immune, necrotic, and stromal regions). Patches of pixels are uniformly sampled from the annotated regions on the PD-L1-stained slides and from the detected epithelium regions on the CK-stained slides, using a 10x resolution (/px).
The test set, similarly generated from partially annotated PD-L1 stained WSIs, is used for selecting the model maximizing the segmentation f1 score.
The validation set, consisting of fields of view () selected from PD-L1-stained WSIs, is densely annotated by pathologists. It is solely employed for quantitative evaluation of the segmentation accuracy and was selected to cover a high variability of cancer types (adeno, squamous) and growth patterns (acinar, papillary and solid).
To study the impact of the amount of PD-L1 manual annotations on the segmentation accuracy, we report results with three different configurations for the training set: (i) 44K patches from slides, (ii) 103K patches from slides and (iii) 149K patches from slides, all patches from (i) being included in (ii) and those of (ii) in (iii). The CK-based training set, the test set, as well as the validation set remain unchanged in these experiments.
3.2 Network architectures
The networks of the original CycleGAN paper are modified as follows. The two generators take as input the concatenation of the images and of their respective segmentation masks. For the two discriminators, weights between the prediction of the source distribution and of the semantic segmentation posterior maps are shared in the first three convolutional layers, and the branch for semantic segmentation is extended to include three resnet blocks and three deconvolutional layers. Spectral normalization and self-attention blocks are added in the discriminators and generators to increase training stability and to model long-range structural dependencies respectively. Network definition, training and inference are performed using the Tensorflow library. All models are trained on a single Nvidia V100 GPU with 32GB of memory, with Adam optimization performed for both the generators (lr=1e-4, beta1=0.5) and the discriminators (lr=5e-4, beta1=0.5) for 150k iterations. Because the same architecture is used by all networks for segmentation, the prediction time is the same for all networks: 0.08 sec for pixels, measured on an Nvidia K80 GPU.
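The core of spectral normalization is dividing a weight matrix by an estimate of its largest singular value, obtained by power iteration. The following numpy sketch illustrates the idea only; it is not the layer-wise TensorFlow implementation used in training, and the function name is ours:

```python
import numpy as np

def spectral_normalize(w, n_iters=100):
    """Normalize a weight matrix by its largest singular value,
    estimated with power iteration as in spectral normalization."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = w @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ w @ v  # Rayleigh-quotient estimate of the top singular value
    return w / sigma
```

After normalization the spectral norm of the matrix is approximately 1, which bounds the Lipschitz constant of the corresponding linear layer and stabilizes discriminator training.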
3.3 Segmentation performance
Segmentation accuracy is reported on the unseen validation set. We first consider configuration (i), corresponding to a relative shortage of manual annotations. As illustrated in Fig. 2, the proposed DASGAN outperforms the two models trained solely on real or synthetic PD-L1 images as well as the two-step model trained on real and synthetic PD-L1 images. Mean f1 scores of are reported for the DASGAN on the binary problem of epithelium detection and on the three-class problem of positive and negative epithelial regions respectively. Surprisingly, the two-step approach does not improve the segmentation results () compared to training only on real PD-L1 images (). A possible explanation is that, while the transformation between the two stain domains is fixed in the two-step methodology, the DASGAN enables the domain transfer network to be optimized with the objective not only to generate realistic PD-L1 images but also to ensure that the generated images improve the performance of the segmentation network.
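For reference, the per-class pixel-wise f1 score used throughout these comparisons can be computed as follows (a minimal numpy sketch; the function name is ours):

```python
import numpy as np

def f1_per_class(y_true, y_pred, n_classes):
    """Pixel-wise f1 score for each class of a segmentation label map."""
    scores = []
    for k in range(n_classes):
        tp = np.sum((y_pred == k) & (y_true == k))  # true positives
        fp = np.sum((y_pred == k) & (y_true != k))  # false positives
        fn = np.sum((y_pred != k) & (y_true == k))  # false negatives
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom > 0 else 0.0)
    return scores
```

The binary epithelium-detection problem uses two classes; the positive/negative epithelium problem uses three.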
As shown in Fig. 3, while the proposed DASGAN model systematically outperforms the baseline model trained only on real PD-L1 samples, the relative improvement in accuracy metrics tends to decrease with increasing availability of manual annotations. In configuration (iii) of highest availability, accuracy metrics of and are reached by the baseline model trained only on real PD-L1 samples and by our approach respectively. This quantitatively confirms the expectation that the use of synthetic data is most relevant in the case of a relative shortage of manually labeled data.
3.4 Tumor Cell scoring
PD-L1 status, which is predictive for survival of NSCLC patients receiving PD1/PD-L1 checkpoint inhibitor therapy, is determined based on the Tumor Cell score, defined as the percentage of tumor epithelial cells that are PD-L1 positive. Following , the TC score is approximated as the relative area of the detected TC(+) regions:

$$\widehat{TC} = \frac{\mathrm{Area}\big(TC^{(+)}\big)}{\mathrm{Area}\big(TC^{(+)}\big) + \mathrm{Area}\big(TC^{(-)}\big)}.$$
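Given the three-class segmentation output (background 0, PD-L1 negative epithelium 1, PD-L1 positive epithelium 2), this area-based approximation reduces to a ratio of pixel counts (a minimal sketch; the function name and label convention follow the mask conditioning described above):

```python
import numpy as np

def tc_score(seg_map, neg_label=1, pos_label=2):
    """Approximate the Tumor Cell score (in %) as the relative area of
    PD-L1 positive epithelium among all detected epithelium."""
    pos_area = np.sum(seg_map == pos_label)
    neg_area = np.sum(seg_map == neg_label)
    total = pos_area + neg_area
    return 100.0 * pos_area / total if total > 0 else 0.0
```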
Fig. 4a displays an example of epithelial segmentation output by the proposed DASGAN model. To quantitatively assess the clinical relevance of the proposed approach, we consider a set of 704 PD-L1-stained images unseen during training and during selection of the segmentation model. This set originates from three independent patient cohorts and contains both needle biopsies and resectates. Fig. 4b shows the bar plot of mean and standard deviation of the estimated TC scores against the true TC scores visually estimated by pathologists. Lin's concordance coefficient of , Pearson correlation coefficient of and mean absolute error of are reported between the estimated and the true TC score values, quantitatively showing the high concordance of the proposed method with visual scoring by pathologists.
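Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic bias between estimated and true scores, can be computed as below (a numpy sketch of the standard definition; the function name is ours):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    # CCC = 1 only for perfect agreement (identity line), unlike Pearson's r.
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```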
4 Discussion and Conclusion
In this paper, we introduce a novel method to leverage data from two stain domains (CK and PD-L1) and two independent cohorts for the segmentation of epithelium in PD-L1 images. The semi-automatic generation of large boundary-precise datasets for epithelium segmentation in CK images, together with their unpaired translation into realistic-looking PD-L1 images, makes it possible to generate a large dataset for epithelial segmentation in the PD-L1 stain domain, without the need for serial sections or re-staining of slides.
The proposed DASGAN model performs joint domain translation and semantic segmentation. As experimentally shown, it enables (i) the segmentation of the epithelial compartment, (ii) the segmentation of PD-L1 positive and PD-L1 negative epithelial regions, and (iii) the replication of the PD-L1 Tumor Cell (TC) score. Upon confirmation that the results match the survival predictive ability of manual scoring, and upon replication of the findings in a prospective trial, we envision that the PD-L1 SegNet of the DASGAN network could be deployed in a clinical setting to identify patients who may benefit from an anti-PD1/PD-L1 therapy. More generally, we believe that the novelty of the DASGAN for joint domain adaptation and segmentation and its demonstrated performance against the two-step approach make it of interest to the medical image analysis community beyond the sole analysis of PD-L1 stained histopathology images.
-  Al-Shibli, K.I., Donnem, T., Al-Saad, S., Persson, M., Bremnes, R.M., Busund, L.T.: Prognostic effect of epithelial and stromal lymphocyte infiltration in non–small cell lung cancer. Clinical cancer research 14(16), 5220–5227 (2008)
-  Chartsias, A., et al.: Adversarial image synthesis for unpaired multi-modal cardiac data. In: Intl Wksp on Simulation and Synthesis in Medical Imaging (2017)
-  Cruz-Roa, A., et al.: Accurate and reproducible invasive breast cancer detection in whole-slide images: A deep learning approach for quantifying tumor extent. Scientific reports 7, 46450 (2017)
-  Fridman, W.H., et al.: The immune contexture in cancer prognosis and treatment. Nature reviews Clinical oncology 14(12), 717 (2017)
-  Jiang, J., et al.: Tumor-aware, adversarial domain adaptation from ct to mri for lung cancer segmentation. In: MICCAI (2018)
-  Kapil, A., Brieu, N., et al.: Deep semi supervised generative learning for automated tumor proportion scoring on nsclc tissue needle biopsies. Scientific reports (2018)
-  Litjens, G., et al.: Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific reports 6, 26286 (2016)
-  Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957 (2018)
-  Noh, H., Hong, S., Han, B.: Learning deconvolution network for semantic segmentation. In: ICCV. pp. 1520–1528 (2015)
-  Odena, A., Olah, C., Shlens, J.: Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585 (2016)
-  Rebelatto, M.C., et al.: Development of a programmed cell death ligand-1 immunohistochemical assay validated for analysis of non-small cell lung cancer and head and neck squamous cell carcinoma. Diagnostic pathology 11(1), 95 (2016)
-  Shaban, M.T., Baur, C., Navab, N., Albarqouni, S.: Staingan: Stain style transfer for digital histological images. arXiv preprint arXiv:1804.01601 (2018)
-  Tellez, D., et al.: Whole-slide mitosis detection in H&E breast histology using PHH3 as a reference to train distilled stain-invariant convolutional networks. IEEE transactions on medical imaging 37(9), 2126–2136 (2018)
-  Zhang, H., Goodfellow, I., Metaxas, D., Odena, A.: Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318 (2018)
-  Zhu, J.Y., et al.: Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint (2017)