SeATrans: Learning Segmentation-Assisted diagnosis model via Transformer

06/12/2022 · by Junde Wu, et al.

Clinically, accurate annotation of lesions/tissues can significantly facilitate disease diagnosis. For example, segmentation of the optic disc/cup (OD/OC) on fundus images facilitates glaucoma diagnosis, segmentation of skin lesions on dermoscopic images helps melanoma diagnosis, etc. With the advancement of deep learning techniques, a wide range of methods have shown that lesion/tissue segmentation can also facilitate automated disease diagnosis models. However, existing methods are limited in the sense that they can only capture static regional correlations in the images. Inspired by the global and dynamic nature of the Vision Transformer, in this paper we propose the Segmentation-Assisted diagnosis Transformer (SeATrans) to transfer segmentation knowledge to the disease diagnosis network. Specifically, we first propose an asymmetric multi-scale interaction strategy to correlate each single low-level diagnosis feature with multi-scale segmentation features. Then, an effective strategy called SeA-block is adopted to vitalize the diagnosis feature via the correlated segmentation features. To model the segmentation-diagnosis interaction, SeA-block first embeds the diagnosis feature based on the segmentation information via an encoder, and then transfers the embedding back to the diagnosis feature space via a decoder. Experimental results demonstrate that SeATrans surpasses a wide range of state-of-the-art (SOTA) segmentation-assisted diagnosis methods on several disease diagnosis tasks.


1 Introduction

Clinically, disease diagnosis is usually conducted based on critical biomarkers derived from an analysis of the images. For example, on fundus images, the vertical Cup-to-Disc Ratio (vCDR) computed from the optic disc/cup (OD/OC) masks is one of the most important clinical parameters for glaucoma diagnosis [12]. In melanoma diagnosis, an unusual shape of the skin lesion is a major biomarker [11]. In order to derive these important biomarkers, an essential step is to identify lesions or tissues in an image and segment these areas of interest from the rest of the image [18, 9, 34].
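
As a concrete example of how such a biomarker follows directly from segmentation, the sketch below computes the vCDR from binary OD/OC masks. The function name and the pixel-wise definition of the vertical diameter are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def vertical_cup_to_disc_ratio(od_mask: np.ndarray, oc_mask: np.ndarray) -> float:
    """Illustrative vCDR from binary optic-disc / optic-cup masks (H x W)."""
    def vertical_diameter(mask: np.ndarray) -> int:
        rows = np.any(mask > 0, axis=1)   # rows containing the structure
        return int(rows.sum())            # vertical extent in pixels

    disc = vertical_diameter(od_mask)
    cup = vertical_diameter(oc_mask)
    return cup / disc if disc > 0 else 0.0
```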

Motivated by this observation, methods have been proposed to utilize segmentation information to facilitate automated disease diagnosis [10, 36, 31, 19, 4, 30, 33]. Common practices include region of interest (ROI) extraction [10, 4], input concatenation, channel attention [19, 36], and transfer learning [31]. These methods have two main limitations. First, methods proposed for specific medical tasks are not general enough; they are often inapplicable or perform unsatisfactorily on other medical tasks. Second, most methods simply assume that the segmentation and diagnosis features are regionally correlated, which is an invalid assumption in most cases. The traditional techniques they apply, such as convolution layers and channel attention, struggle to model this non-regional feature interaction, since these tools are largely local-focused. With the rise of the vision transformer [7], this research gap can potentially be addressed by its global and dynamic nature [22].

In this paper, we propose a novel transformer-based model to better capture the interaction of segmentation and diagnosis features. In order to address the scale-level discrepancy between segmentation and diagnosis features, we propose asymmetric multi-scale interaction to correlate multi-scale segmentation features with each single low-level diagnosis feature. A one-to-one coarse interaction and a one-against-rest fine-grained interaction are fused to produce the final feature. An effective approach, called SeA-block, is proposed to model the segmentation-diagnosis interaction, constructed from an encoder-decoder pair. The encoder first embeds the diagnosis feature through the calculated segmentation affinity map. Then a decoder maps the embedding back to the diagnosis feature space through the calculated diagnosis affinity map. Through the SeA-block, diagnosis features can be vitalized by the correlated segmentation information.

In brief, we make three major contributions. First, we propose a general segmentation-assisted diagnosis model, named SeATrans, for integrating segmentation and diagnosis based on medical images. Thanks to the global and dynamic nature of the transformer mechanism, SeATrans achieves superior and robust performance compared with state-of-the-art counterparts. Second, we propose asymmetric multi-scale interaction to correlate each low-level diagnosis feature with multi-scale segmentation features. In this way, diagnosis features can be vitalized by both coarse and fine-grained segmentation information. Last but not least, we propose a new strategy, i.e., the SeA-block, for the segmentation-diagnosis interaction. A transformer-based encoder-decoder architecture is constructed to learn across the segmentation and diagnosis feature spaces. Experimental results show that SeATrans outperforms the previous best method by at least 2% AUC over three different disease diagnosis tasks. Meanwhile, it shows competitive robustness to the domain shift of the segmentation model.

2 Methodology

In this paper, we propose a general segmentation-assisted diagnosis framework. Given a raw image and its lesion/tissue segmentation features extracted from a segmentation network (joint-trained or pre-trained), our goal is to predict the disease (0 for benign, 1 for malignant) of the image. Our basic idea is to integrate the segmentation information into the diagnosis model at the feature level. The interaction module and diagnosis model are jointly optimized to predict the correct diagnosis. An illustration of the overall workflow is shown in Fig. 1 (a). The raw fundus image is first sent into a UNet to obtain the deep segmentation embedding. The segmentation features in the UNet decoder are used to interact with the diagnosis features of a disease diagnosis network. In the diagnosis model, convolution layers and SeA-block based interaction alternately abstract and vitalize the features. The final disease probability is supervised by the binary disease label through the binary cross-entropy (BCE) loss function.
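
The following sketch illustrates how this pipeline could be wired in PyTorch. The module and argument names (seg_unet, diag_stages, interactions) are hypothetical placeholders for the UNet, the diagnosis backbone stages, and the SeA-block based interaction modules described above; it is an illustrative outline, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SeATransPipeline(nn.Module):
    """Sketch of the overall segmentation-assisted diagnosis flow.

    seg_unet     : UNet returning a list of multi-scale decoder features.
    diag_stages  : list of convolutional stages of the diagnosis backbone.
    interactions : dict {stage_index: SeA-block based interaction module}.
    All names/interfaces are illustrative assumptions, not the paper's code.
    """
    def __init__(self, seg_unet, diag_stages, interactions, num_classes=1):
        super().__init__()
        self.seg_unet = seg_unet
        self.diag_stages = nn.ModuleList(diag_stages)
        self.interactions = nn.ModuleDict({str(k): v for k, v in interactions.items()})
        self.head = nn.LazyLinear(num_classes)

    def forward(self, image):
        seg_feats = self.seg_unet(image)          # multi-scale segmentation features
        x = image
        for i, stage in enumerate(self.diag_stages):
            x = stage(x)                          # convolutional abstraction
            if str(i) in self.interactions:       # vitalize low-level features (Sec. 2.1)
                x = self.interactions[str(i)](x, seg_feats)
        x = nn.functional.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)                       # disease logit


# Training is supervised by the binary disease label through BCE:
# loss = nn.BCEWithLogitsLoss()(model(images).squeeze(1), labels.float())
```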

Figure 1: An illustration of SeATrans framework, which starts from (a) an overview of the processing pipeline, and continues with zoomed-in diagrams of individual modules, including (b) the Asymmetric Multi-scale Interaction and (c) the SeA-block.

2.1 Asymmetric Multi-scale Interaction

Note that the diagnosis network abstracts low-level structure features into deep semantic features, while the segmentation model abstracts multi-scale structural features. In order to align the diagnosis and segmentation features, we correlate multi-scale segmentation features with each single low-level diagnosis feature. As shown in Fig. 1 (b), stacked multi-scale segmentation features are collected for a single low-level diagnosis feature. The segmentation feature with the largest scale first interacts with the target diagnosis feature. As the large-scale feature contains more specific but artifact-prone structure information [29], this one-to-one interaction produces a coarse vitalized diagnosis feature. The other segmentation features with smaller scales are fused together for the interaction with the diagnosis feature. Since these features are more fine-grained and abstract, this interaction produces a fine-grained vitalized diagnosis feature. The coarse and fine-grained features are fused by a convolution layer to produce the final result.

In practice, the second and third layers of the diagnosis model interact with the multi-scale features of the UNet decoder. Denote the diagnosis feature of the l-th layer as x_l with shape (H/r) x (W/r) x C, where l is the layer index and H, W, r, C are the height, width, down-sample rate and channel number, respectively. To instill segmentation information into x_l, stacked multi-scale segmentation features {s_1, ..., s_m} (m is the number of layers) are collected for the interaction. First, x_l interacts with the largest-scale feature s_1 through a SeA-block for coarse vitalization. The subsequent segmentation features are then rearranged by pixel shuffle [26] to a common scale and concatenated together, and the result interacts with the diagnosis feature for the fine-grained interaction. The fine-grained and coarse features are integrated by a convolution kernel to obtain the final vitalized diagnosis feature with the same shape as x_l. A residual convolution block [16] with a pooling layer is then connected to abstract the next feature x_{l+1}.
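
A minimal sketch of the asymmetric multi-scale interaction is given below. It assumes hypothetical coarse_block and fine_block modules (e.g., SeA-blocks wrapped with the ViT-style flattening of Sec. 2.2) that map a diagnosis feature map and a segmentation feature map to a vitalized feature map of the diagnosis feature's shape; upscaling everything to the largest segmentation scale, and the channel counts being compatible with pixel shuffle and concatenation, are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricMultiScaleInteraction(nn.Module):
    """Sketch of the asymmetric interaction (Fig. 1 (b))."""
    def __init__(self, channels, coarse_block, fine_block):
        super().__init__()
        self.coarse_block = coarse_block                 # SeA-block for the largest-scale feature
        self.fine_block = fine_block                     # SeA-block for the fused smaller-scale features
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)  # fuse coarse + fine

    def forward(self, diag_feat, seg_feats):
        # seg_feats: multi-scale segmentation features, largest scale first
        largest, *smaller = seg_feats

        # one-to-one interaction with the largest-scale segmentation feature
        coarse = self.coarse_block(diag_feat, largest)

        # one-against-rest: pixel-shuffle the smaller-scale features to a common
        # scale, concatenate, then interact with the same diagnosis feature
        target = largest.shape[-2]
        shuffled = [F.pixel_shuffle(f, target // f.shape[-2]) for f in smaller]
        fine = self.fine_block(diag_feat, torch.cat(shuffled, dim=1))

        return self.fuse(torch.cat([coarse, fine], dim=1))
```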

2.2 SeA-block

The SeA-block is adopted for the segmentation-diagnosis feature interaction. The architecture of the SeA-block is shown in Fig. 1 (c). The proposed SeA-block contains an encoder and a decoder. The encoder embeds the diagnosis feature according to its affinity with the segmentation feature, which is implemented with the multi-head dot-product attention mechanism (MHA) [28]. Formally, consider encoding a diagnosis feature x with a segmentation feature s: we use s as the query and x as the key and value of the attention, which can be formulated as:

x_emb = MHA(s + e_s, x + e_x, x + e_x),    (1)

where e_s and e_x are positional encodings [5] for the segmentation feature and the diagnosis feature, respectively, and MHA(query, key, value) denotes multi-head attention. The features are all reshaped into a sequence of flattened patches following ViT. In this attention mechanism, the normalized affinity weights are first calculated between query and key to reflect the correlation between the diagnosis and segmentation features globally. Then the affinity weights are used to select and reinforce the diagnosis feature through the dot product with the value. After the attention, Layer Normalization [3] with a residual connection is applied before and after the MLP layer. The embedded diagnosis feature, which we denote as x_emb, is output with the same shape as the inputs.

A decoder is connected after the encoder to map x_emb back to the diagnosis feature space. There are two inputs to the decoder: the diagnosis embedding x_emb and the original diagnosis feature x. Being symmetrical to the encoder, the decoder is implemented by multi-head attention with the diagnosis feature x as the query and the diagnosis embedding x_emb as the key and value, which can be formulated as:

x' = MHA(x + e_x, x_emb + e_emb, x_emb + e_emb),    (2)

where e_x and e_emb are positional encodings for the diagnosis feature and the diagnosis embedding, respectively. The decoder transfers x_emb to a diagnosis feature by enhancing its affinity with x. A self-attention block is connected after the decoder to refine the representations. The obtained sequence is reshaped back into a vitalized diagnosis feature with the same shape as x.
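
A compact sketch of a SeA-block consistent with the description above is shown below, using PyTorch's nn.MultiheadAttention on ViT-style token sequences. The head count, MLP width, and the exact placement of normalization and positional encodings are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SeABlock(nn.Module):
    """Sketch of a SeA-block: an attention encoder embedding the diagnosis
    feature via its affinity with the segmentation feature (Eq. 1), and a
    decoder mapping the embedding back to the diagnosis space (Eq. 2),
    followed by a self-attention refinement. Inputs are ViT-style token
    sequences of shape (B, N, dim)."""
    def __init__(self, dim, num_heads=8, mlp_ratio=4):
        super().__init__()
        self.encoder_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.decoder_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)   # before the MLP
        self.norm2 = nn.LayerNorm(dim)   # after the MLP (with residual)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, diag, seg, pos_diag=0.0, pos_seg=0.0, pos_emb=0.0):
        # Encoder (Eq. 1): segmentation feature as query, diagnosis feature as key/value
        emb, _ = self.encoder_attn(seg + pos_seg, diag + pos_diag, diag + pos_diag)
        emb = self.norm1(emb)
        emb = self.norm2(emb + self.mlp(emb))        # residual around the MLP
        # Decoder (Eq. 2): diagnosis feature as query, embedding as key/value
        out, _ = self.decoder_attn(diag + pos_diag, emb + pos_emb, emb + pos_emb)
        # Self-attention refinement of the vitalized diagnosis feature
        out, _ = self.self_attn(out, out, out)
        return out                                   # same shape as `diag`
```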

3 Experiment

3.1 Diagnosis Tasks

We evaluate SeATrans on three different disease diagnosis tasks: glaucoma diagnosis, thyroid cancer diagnosis and melanoma diagnosis. Glaucoma is predicted from fundus images, assisted by OD/OC segmentation. Thyroid cancer is predicted from ultrasound images, assisted by thyroid nodule segmentation. Melanoma is predicted from dermoscopic images, assisted by skin lesion segmentation. The experiments on glaucoma, thyroid cancer and melanoma diagnosis are conducted on the REFUGE-2 dataset [8], the TNMIX dataset [13, 27] and the ISIC dataset [15], which contain 1200, 8046 and 1600 samples, respectively. The datasets are publicly available with both segmentation and diagnosis labels. Train/validation/test sets are split following the default settings of the datasets.

3.2 Experimental Settings

In our experiments, the main framework utilizes the UNet [24] architecture as the segmentation model and ResNet50 [16] as the diagnosis model. The segmentation network is pre-trained on a heterologous data distribution. All experiments are implemented on the PyTorch platform and trained/tested on 4 Tesla P40 GPUs with 24GB of memory. All images are uniformly resized to 256 x 256 pixels. The networks are trained in an end-to-end manner using the Adam optimizer with a mini-batch size of 16 for 80 epochs. The learning rate is initially set to 1e-4. The detailed configurations can be found in the code.
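
For illustration, the snippet below assembles a training configuration matching the settings described above (256 x 256 inputs, ResNet50 diagnosis backbone, Adam, batch size 16, 80 epochs). The data loader and any value not stated in the text are assumptions; this is a sketch, not the released training script.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision.models import resnet50

# Images uniformly resized to 256 x 256, as stated above.
transform = T.Compose([T.Resize((256, 256)), T.ToTensor()])

diag_model = resnet50(num_classes=1)                             # diagnosis backbone
optimizer = torch.optim.Adam(diag_model.parameters(), lr=1e-4)   # initial learning rate
criterion = nn.BCEWithLogitsLoss()
num_epochs, batch_size = 80, 16

# for epoch in range(num_epochs):
#     for images, labels in loader:                  # loader built with `transform`
#         loss = criterion(diag_model(images).squeeze(1), labels.float())
#         optimizer.zero_grad(); loss.backward(); optimizer.step()
```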

To verify the effectiveness of SeATrans, we compare it with several baselines. The vanilla baseline is a standard classification model implemented by ResNet50 with no segmentation mask provided. Three other baselines are implemented with commonly used segmentation-assisted diagnosis techniques [2], denoted as ’Base-cat’, ’Base-multi’, and ’Base-ROI’, respectively. ’Base-cat’ concatenates the estimated masks with the raw images as the input of the diagnosis model. ’Base-multi’ learns a single network for both segmentation and diagnosis. ’Base-ROI’ crops the region of interest (ROI) based on the estimated segmentation masks.
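
As an example of the simplest of these baselines, a ’Base-cat’ style model can be sketched by widening the first convolution of ResNet50 to accept the estimated mask as an extra input channel. The weight-initialization choice below is our own assumption of how such a baseline is typically set up, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# 'Base-cat' style baseline: feed the estimated segmentation mask as a 4th channel.
model = resnet50(num_classes=1)
rgb_conv = model.conv1                                    # original 3-channel stem
model.conv1 = nn.Conv2d(4, rgb_conv.out_channels, kernel_size=7,
                        stride=2, padding=3, bias=False)
with torch.no_grad():
    model.conv1.weight[:, :3] = rgb_conv.weight                        # keep RGB filters
    model.conv1.weight[:, 3:] = rgb_conv.weight.mean(1, keepdim=True)  # init mask channel

image = torch.randn(2, 3, 256, 256)
mask = torch.randn(2, 1, 256, 256)                        # estimated segmentation mask
logits = model(torch.cat([image, mask], dim=1))           # (2, 1) disease logits
```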

In order to verify the generalization of the models, we train the segmentation network on homologous (’-homo’) and heterologous (’-hetero’) data, respectively. ’-homo’ means the segmentation and diagnosis networks are trained on the same source of data. ’-hetero’ means the segmentation model is trained on an external dataset: RIGA [1], DDTI [23] and PH2 [21] for glaucoma, thyroid cancer and melanoma diagnosis, respectively.

3.3 Main Results

Comparing SeATrans with the baselines in Table 1, we can see significant improvements on all three diagnosis tasks. Concretely, compared with the best baseline by AUC, SeATrans improves AUC by 6.56%, 6.78% and 8.14% on glaucoma, thyroid cancer and melanoma diagnosis, respectively, indicating that SeATrans gains general and considerable improvement over the commonly used techniques. SeATrans also achieves the highest sensitivity with competitive accuracy and specificity, indicating it is more applicable to real clinical scenarios, since sensitivity is commonly of great concern in clinical settings.

Comparing the vanilla baseline with the other methods, we can see that, except for ’Base-multi’, segmentation more or less improves the diagnosis performance. This demonstrates that segmentation information of lesions/tissues is useful for automated diagnosis models; however, the improvement it brings depends largely on the way it is used. Multi-task learning based methods appear to be ineffective according to our experimental results. This may be due to the large discrepancy between segmentation and diagnosis features: the segmentation encoder extracts low-level structure features while the diagnosis needs high-level semantic features, so it is hard to learn universal features in one encoder. SeATrans fuses the multi-scale segmentation features into the first few layers of the diagnosis model. In this way, these structure-focused layers are enhanced by awareness of lesion/tissue structures, and the later layers can still abstract high-level diagnosis features. As a result, SeATrans outperforms the other segmentation-assisted diagnosis methods by a large margin.

Tasks   |          | Glaucoma                               | Thyroid Cancer                | Melanoma
Metrics |          | Dice(OD) Dice(OC) ACC SPE SEN AUC      | Dice ACC SPE SEN AUC          | Dice ACC SPE SEN AUC
No Mask | Baseline | - - 82.95 94.06 37.97 77.29            | - 79.29 93.75 68.62 77.08     | - 78.53 92.23 22.22 72.49
-homo   | Base-cat   | 94.73 81.77 78.95 82.18 65.82 81.91  | 86.76 81.74 95.42 70.76 80.06 | 82.35 77.72 87.84 36.11 76.42
        | Base-multi | 94.73 81.77 82.20 92.50 40.50 74.73  | 86.76 77.25 84.58 65.75 72.37 | 82.35 80.98 95.27 22.22 71.72
        | Base-ROI   | 94.73 81.77 75.18 77.18 67.08 79.88  | 86.76 83.25 90.26 74.92 77.50 | 82.35 79.35 90.88 31.94 72.70
        | SeATrans   | 94.73 81.77 86.96 90.93 70.88 88.47  | 86.76 85.54 91.75 78.84 86.84 | 82.35 85.56 93.77 62.74 84.56
-hetero | Base-cat   | 94.60 78.31 82.95 92.19 50.63 80.70  | 85.17 82.13 88.93 73.58 80.15 | 80.07 79.89 92.91 26.39 74.41
        | Base-multi | 94.60 78.31 76.19 90.31 55.70 72.40  | 85.17 81.35 94.81 67.79 72.21 | 80.07 80.98 95.27 22.22 68.70
        | Base-ROI   | 94.60 78.31 83.20 94.06 39.24 77.52  | 85.17 82.39 87.74 78.65 76.73 | 80.07 72.28 77.03 52.78 72.22
        | SeATrans   | 94.60 78.31 80.20 80.62 78.48 87.61  | 85.17 84.60 90.26 80.45 86.23 | 80.07 84.42 87.20 57.35 83.16
Table 1: Comparison with the baselines. Accuracy, specificity, sensitivity and AUC (%) are measured on three different diagnosis tasks. Segmentation performance measured by the Dice score (%) is also reported (OD and OC Dice for glaucoma).

To verify the generalization of the methods, we also conduct experiments on heterologous data, where the segmentation model is pre-trained on an external dataset. Due to the domain shift, the segmentation masks/features are inferior to ’-homo’, which disturbs the diagnosis models. Comparing ’-homo’ with ’-hetero’, we can see a drop in AUC for all methods. SeATrans nevertheless shows very competitive generalization ability, dropping only about 1% AUC on ’-hetero’.

3.4 Comparing with SOTA

To demonstrate the advantage of SeATrans, we compare it with SOTA methods for segmentation-assisted diagnosis. Table 2 quantitatively compares SeATrans with nine SOTA segmentation-assisted diagnosis methods.

SeATrans vs Transformers. Present SOTA transformer-based diagnosis architectures, ConViT [6] and Swin Transformer [20], are involved in the comparison. Segmentation masks are concatenated with the inputs of these models. SeATrans clearly outperforms these transformer architectures, increasing AUC by about 5.60%, 5.82% and 7.10% on glaucoma, thyroid cancer and melanoma, respectively. This demonstrates that a large proportion of the improvement comes from the proposed feature fusion strategy rather than the transformer-like architecture.

SeATrans vs ROI. We compare SeATrans with ROI based segmentation-assisted diagnosis methods: DualStage [4] and DENet [10]. DualStage [4] gains only marginal improvement over the vanilla baseline. Although DENet [10] achieves better performance, it is only applicable to glaucoma diagnosis. SeATrans outperforms the ROI based methods by an average of 4% AUC over a range of tasks.

SeATrans vs Channel Attention. We also compare SeATrans with SOTA channel-attention based segmentation-assisted diagnosis methods: AGCNN [19] and ColNet [36], which adopt channel attention to enhance the diagnosis feature with the segmentation masks/features. SeATrans surpasses AGCNN and ColNet by 6.31% and 3.10% AUC on glaucoma, 3.99% and 2.40% on thyroid cancer, and 4.39% and 3.84% on melanoma diagnosis, indicating the superiority of SeATrans over regionally correlated channel attention.

SeATrans vs Multi-task. The multi-task learning methods MagNet [14] and CMSNET [37] are involved in the comparison. SeATrans consistently outperforms both methods, especially on thyroid cancer diagnosis, where it outperforms MagNet and CMSNET by 11.16% and 10.13% AUC, respectively.

SeATrans vs Transfer learning. L2T-KT [31] uniquely approaches the task via teacher-student based transfer learning and achieves competitive performance. Comparing AUC, SeATrans outperforms L2T-KT by 2.23%, 2.55% and 2.66% on glaucoma, thyroid cancer and melanoma diagnosis, respectively. SeATrans also achieves a better sensitivity-specificity trade-off than L2T-KT: for example, SeATrans achieves a 79.66% F1 score, surpassing the 77.43% F1 score of L2T-KT on glaucoma diagnosis.

Heterologous Data Generalization. Comparing ’-homo’ with ’-hetero’, we can see that ROI based methods (DualStage, DENet) show the best generalization, since they use less segmentation information than the others. SeATrans and the transformer-based methods (ConViT, Swin) also show competitive generalization capability, dropping only about 1% AUC over a range of tasks. Channel-attention based methods (AGCNN, ColNet) are more sensitive, since their regional-correlation assumption is vulnerable to the domain shift. Thanks to its dynamic and global nature, SeATrans attains high performance with very competitive generalization ability compared with the other methods.

Tasks   |               | Glaucoma                | Thyroid Cancer          | Melanoma
Metrics |               | ACC SPE SEN AUC         | ACC SPE SEN AUC         | ACC SPE SEN AUC
-homo   | ConViT [6]    | 80.45 86.56 55.69 82.87 | 80.85 90.31 64.67 81.02 | 79.89 90.87 34.72 77.46
        | Swin [20]     | 81.95 91.56 43.03 82.32 | 82.76 85.70 73.82 80.34 | 80.76 89.11 47.29 76.75
        | DualStage [4] | 80.20 90.31 39.24 80.37 | 79.56 85.64 70.93 77.15 | 78.26 87.84 38.89 72.34
        | DENet [10]    | 80.04 85.00 59.49 84.70 | - - - -                 | - - - -
        | AGCNN [19]    | 81.20 89.68 41.77 82.16 | 84.78 88.69 71.05 82.85 | 82.60 92.20 43.83 80.17
        | ColNet [36]   | 79.69 79.69 79.74 85.36 | 87.60 94.08 72.47 84.43 | 85.21 98.31 30.98 80.72
        | MagNet [14]   | 83.20 94.06 39.24 77.52 | 78.91 86.71 69.25 75.68 | 75.54 81.08 52.77 71.77
        | CMSNET [37]   | 64.16 73.41 61.85 80.86 | 74.52 87.03 67.32 76.71 | 82.88 98.32 17.14 78.55
        | L2T-KT [31]   | 80.20 80.62 78.48 86.24 | 81.49 90.31 75.59 84.29 | 79.34 84.69 58.10 81.90
        | SeATrans      | 86.96 90.93 70.88 88.47 | 85.54 91.75 78.84 86.84 | 85.59 93.77 62.74 84.56
-hetero | ConViT [6]    | 80.45 91.56 35.44 82.37 | 77.46 93.75 60.11 81.02 | 79.34 85.42 54.79 76.55
        | Swin [20]     | 72.18 83.54 69.37 81.85 | 80.34 83.45 76.18 80.34 | 79.89 87.11 50.68 75.28
        | DualStage [4] | 75.18 67.08 77.18 80.22 | 80.70 88.69 75.71 77.15 | 81.79 92.56 37.50 72.63
        | DENet [10]    | 78.94 84.81 77.50 84.12 | - - - -                 | - - - -
        | AGCNN [19]    | 66.16 62.81 79.74 80.94 | 82.29 93.41 70.58 82.85 | 76.08 78.98 64.38 77.38
        | ColNet [36]   | 61.40 97.46 52.50 82.78 | 84.93 91.19 73.82 84.43 | 82.33 93.91 34.72 77.95
        | MagNet [14]   | 70.42 70.31 70.88 75.44 | 78.15 87.74 77.36 75.68 | 79.34 90.50 33.33 69.67
        | CMSNET [37]   | 60.15 77.21 55.93 78.17 | 81.75 84.60 78.14 76.71 | 72.82 73.73 69.01 76.39
        | L2T-KT [31]   | 79.95 85.00 59.49 84.98 | 83.89 94.04 76.18 84.29 | 82.06 88.51 55.56 80.75
        | SeATrans      | 80.70 80.62 78.48 87.61 | 84.60 90.26 80.45 86.23 | 84.42 87.20 57.35 83.16
Table 2: Comparison with SOTA segmentation-assisted diagnosis methods. Accuracy, specificity, sensitivity and AUC (%) are measured on three different diagnosis tasks.

3.5 Ablation study

Ablation studies are performed on each component of SeATrans, including multi-scale, asymmetric interaction and SeA-block, as listed in Table 3. The experiments are conducted on the glaucoma diagnosis task. Feature concatenation is adopted to replace the SeA-block where it is ablated. In Table 3, as we sequentially add the proposed modules to the vanilla baseline, the model performance gradually improves. First, applying multi-scale segmentation-diagnosis integration increases the AUC by about 2% on homologous data but only about 0.6% on heterologous data, indicating that multi-scale integration improves the diagnosis performance with limited generalization. Then, the asymmetric multi-scale interaction is applied to further focus the integration on the low-level features, which boosts AUC by 3.53% and 3.42% on ’-homo’ and ’-hetero’, respectively. Finally, the SeA-block is utilized for the segmentation-diagnosis interaction. The diagnosis performance is remarkably improved, gaining 5.09% and 6.34% AUC on ’-homo’ and ’-hetero’, respectively. This indicates that the SeA-block gains significant and general improvement from its dynamic and global interaction.

Multi-scale | Asymmetric | SeA-block | -homo | -hetero
            |            |           | 77.29 | 77.29
✓           |            |           | 79.85 | 77.84
✓           | ✓          |           | 83.38 | 81.27
✓           | ✓          | ✓         | 88.47 | 87.61
Table 3: Ablation study on the glaucoma diagnosis task. The diagnosis performance is measured by AUC (%).

4 Conclusion

In this work, we proposed SeATrans to overcome the shortcomings of existing segmentation-assisted diagnosis models. In SeATrans, an asymmetric multi-scale interaction is proposed to address the scale-level discrepancy between segmentation and diagnosis. Then the SeA-block is constructed for global and dynamic feature interaction between the segmentation and diagnosis spaces. Extensive experiments demonstrate the general and superior performance of the proposed SeATrans on a range of medical image diagnosis tasks.

References

  • [1] A. Almazroa, S. Alodhayb, E. Osman, E. Ramadan, M. Hummadi, M. Dlaim, M. Alkatee, K. Raahemifar, and V. Lakshminarayanan (2017) Agreement among ophthalmologists in marking the optic disc and optic cup in fundus images. International ophthalmology 37 (3), pp. 701–717. Cited by: §3.2.
  • [2] S. M. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and M. K. Khan (2018) Medical image analysis using convolutional neural networks: a review. Journal of Medical Systems 42 (11), pp. 1–13. Cited by: §3.2.
  • [3] J. L. Ba, J. R. Kiros, and G. E. Hinton (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §2.2.
  • [4] M. N. Bajwa, M. I. Malik, S. A. Siddiqui, A. Dengel, F. Shafait, W. Neumeier, and S. Ahmed (2019) Two-stage framework for optic disc localization and glaucoma classification in retinal fundus images using deep learning. BMC medical informatics and decision making 19 (1), pp. 1–16. Cited by: §1, §3.4, Table 2.
  • [5] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko (2020) End-to-end object detection with transformers. In European Conference on Computer Vision, pp. 213–229. Cited by: §2.2.
  • [6] S. d’Ascoli, H. Touvron, M. Leavitt, A. Morcos, G. Biroli, and L. Sagun (2021) ConViT: improving vision transformers with soft convolutional inductive biases. arXiv preprint arXiv:2103.10697. Cited by: §3.4, Table 2.
  • [7] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. (2020) An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Cited by: §1.
  • [8] H. Fang, F. Li, H. Fu, X. Sun, X. Cao, J. Son, S. Yu, M. Zhang, C. Yuan, C. Bian, et al. (2022) REFUGE2 challenge: treasure for multi-domain learning in glaucoma assessment. arXiv preprint arXiv:2202.08994. Cited by: §3.1.
  • [9] H. Fu, J. Cheng, Y. Xu, D. W. K. Wong, J. Liu, and X. Cao (2018) Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE transactions on medical imaging 37 (7), pp. 1597–1605. Cited by: §1.
  • [10] H. Fu, J. Cheng, Y. Xu, C. Zhang, D. W. K. Wong, J. Liu, and X. Cao (2018) Disc-aware ensemble network for glaucoma screening from fundus image. IEEE transactions on medical imaging 37 (11), pp. 2493–2501. Cited by: §1, §3.4, Table 2.
  • [11] J. Gachon, P. Beaulieu, J. F. Sei, J. Gouvernet, J. P. Claudel, M. Lemaitre, M. A. Richard, and J. J. Grob (2005) First prospective study of the recognition process of melanoma in dermatological practice. Archives of dermatology 141 (4), pp. 434–438. Cited by: §1.
  • [12] D. F. Garway-Heath, S. T. Ruben, A. Viswanathan, and R. A. Hitchings (1998) Vertical cup/disc ratio in relation to optic disc size: its value in the assessment of the glaucoma suspect. British Journal of Ophthalmology 82 (10), pp. 1118–1124. Cited by: §1.
  • [13] H. Gong, G. Chen, R. Wang, X. Xie, M. Mao, Y. Yu, F. Chen, and G. Li (2021) Multi-task learning for thyroid nodule segmentation with thyroid region prior. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 257–261. Cited by: §3.1.
  • [14] S. Gupta, N. S. Punn, S. K. Sonbhadra, and S. Agarwal (2021) MAG-net: multi-task attention guided network for brain tumor segmentation and classification. In International Conference on Big Data Analytics, pp. 3–15. Cited by: §3.4, Table 2.
  • [15] D. Gutman, N. C. Codella, E. Celebi, B. Helba, M. Marchetti, N. Mishra, and A. Halpern (2016) Skin lesion analysis toward melanoma detection: a challenge at the international symposium on biomedical imaging (isbi) 2016, hosted by the international skin imaging collaboration (isic). arXiv preprint arXiv:1605.01397. Cited by: §3.1.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §2.1, §3.2.
  • [17] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry (2019) Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175. Cited by: §5.
  • [18] W. Ji, S. Yu, J. Wu, K. Ma, C. Bian, Q. Bi, J. Li, H. Liu, L. Cheng, and Y. Zheng (2021) Learning calibrated medical image segmentation via multi-rater agreement modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12341–12351. Cited by: §1.
  • [19] L. Li, M. Xu, X. Wang, L. Jiang, and H. Liu (2019) Attention based glaucoma detection: a large-scale database and cnn model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10571–10580. Cited by: §1, §3.4, Table 2.
  • [20] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo (2021) Swin transformer: hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012–10022. Cited by: §3.4, Table 2.
  • [21] T. Mendonça, P. M. Ferreira, J. S. Marques, A. R. Marcal, and J. Rozeira (2013) PH 2-a dermoscopic image database for research and benchmarking. In 2013 35th annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp. 5437–5440. Cited by: §3.2.
  • [22] M. M. Naseer, K. Ranasinghe, S. H. Khan, M. Hayat, F. Shahbaz Khan, and M. Yang (2021) Intriguing properties of vision transformers. Advances in Neural Information Processing Systems 34. Cited by: §1.
  • [23] L. Pedraza, C. Vargas, F. Narváez, O. Durán, E. Muñoz, and E. Romero (2015) An open access thyroid ultrasound image database. In 10th International Symposium on Medical Information Processing and Analysis, Vol. 9287, pp. 92870W. Cited by: §3.2.
  • [24] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §3.2.
  • [25] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pp. 618–626. Cited by: §5.
  • [26] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874–1883. Cited by: §2.1.
  • [27] N. Shusharina, M. P. Heinrich, and R. Huang (Eds.) (2020) Segmentation, classification, and registration of multi-modality medical imaging data. Cited by: §3.1.
  • [28] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §2.2.
  • [29] Z. Wojna, V. Ferrari, S. Guadarrama, N. Silberman, L. Chen, A. Fathi, and J. Uijlings (2017) The devil is in the decoder. In British Machine Vision Conference 2017, BMVC 2017, pp. 1–13. Cited by: §2.1.
  • [30] J. Wu, H. Fang, F. Li, H. Fu, F. Lin, J. Li, L. Huang, Q. Yu, S. Song, X. Xu, et al. (2022) Gamma challenge: glaucoma grading from multi-modality images. arXiv preprint arXiv:2202.06511. Cited by: §1.
  • [31] J. Wu, S. Yu, W. Chen, K. Ma, R. Fu, H. Liu, X. Di, and Y. Zheng (2020) Leveraging undiagnosed data for glaucoma classification with teacher-student learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 731–740. Cited by: §1, §3.4, Table 2.
  • [32] J. Wu (2019) Generating adversarial examples in the harsh conditions. arXiv preprint arXiv:1908.11332. Cited by: §5.
  • [33] Y. Yang, S. Fangxin, W. Binghong, Y. Dalu, W. Lei, Y. Xu, W. Zhang, and T. Zhang (2021) Robust collaborative learning of patch-level and image-level annotations for diabetic retinopathy grading from fundus image.. IEEE Transactions on Cybernetics, pp. 1–11. Cited by: §1.
  • [34] Y. Yuan, M. Chao, and Y. Lo (2017) Automatic skin lesion segmentation using deep fully convolutional networks with jaccard distance. IEEE transactions on medical imaging 36 (9), pp. 1876–1886. Cited by: §1.
  • [35] H. Zhang, Y. Yu, J. Jiao, E. Xing, L. El Ghaoui, and M. Jordan (2019) Theoretically principled trade-off between robustness and accuracy. In International conference on machine learning, pp. 7472–7482. Cited by: §5.
  • [36] Y. Zhou, X. He, L. Huang, L. Liu, F. Zhu, S. Cui, and L. Shao (2019) Collaborative learning of semi-supervised segmentation and classification for medical images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2079–2088. Cited by: §1, §3.4, Table 2.
  • [37] Y. Zhou, H. Chen, Y. Li, Q. Liu, X. Xu, S. Wang, P. Yap, and D. Shen (2021) Multi-task learning for segmentation and classification of tumors in 3d automated breast ultrasound images. Medical Image Analysis 70, pp. 101918. Cited by: §3.4, Table 2.

5 Supplementary material

In order to further analyze the interrelation of segmentation and diagnosis, we adopt network explanation techniques to visualize the discriminative features of the models. Grad-CAM [25] is a commonly used explanation tool that produces visual explanations for model decisions; it visualizes the gradients of the loss function as pixel-wise weighted feature maps. We compare the Grad-CAM visualization results on a glaucoma diagnosis example in Fig. 2.
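
For reference, Grad-CAM can be sketched with forward/backward hooks as below; model, image, and target_layer are placeholders, and this minimal version omits details such as handling multiple target classes.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer):
    """Minimal Grad-CAM: weight the target layer's activations by the spatially
    averaged gradient of the diagnosis score, ReLU, and normalize to [0, 1]."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    score = model(image.unsqueeze(0))        # disease logit, shape (1, 1)
    model.zero_grad()
    score.sum().backward()
    h1.remove(); h2.remove()

    act, grad = acts[0], grads[0]            # (1, C, H, W)
    weights = grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```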

Figure 2: Visualization results compared with the other methods on fundus images based glaucoma diagnosis. Grad-CAM is adopted to show the attentive regions for the diagnosis.

We can see that the ROI based methods (DENet and DualStage) and the transformer-based methods (Swin and ConViT) show less attention on the clinically focused regions, like the optic cup. This may be because these methods impose the segmentation enhancement on the model inputs rather than the deep features. Although the explanation is not as good, some of these models with sophisticated network structures still achieve fine diagnosis performance, like Swin, ConViT and DENet. Recent literature also shows that sophisticated networks can exhibit stronger capability but inferior explanation [35, 32, 17], since they are prone to learn features that are discriminative for the network but meaningless to humans [17]. Multi-task based methods (MagNet and CMSNET) and channel-attention based methods (AGCNN and ColNet) mainly focus on the optic-cup region, which is important for clinical glaucoma diagnosis, but most of them are not implemented with sufficient learnable parameters, which causes their inferior diagnosis performance. The transfer-learning based method (L2T-KT) and the proposed SeATrans pay more attention to the optic-cup region. Besides the optic-cup region, SeATrans also focuses on the gap between the OC and OD boundaries, which is another important clinical indicator of glaucoma suspects. These visualization results demonstrate that SeATrans reaches superior diagnosis performance with a clear and reasonable explanation.