Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation

Lymphoma detection and segmentation from whole-body Positron Emission Tomography/Computed Tomography (PET/CT) volumes are crucial for surgical indication and radiotherapy. Designing automatic segmentation methods capable of effectively exploiting the information from PET and CT as well as resolving their uncertainty remains a challenge. In this paper, we propose a lymphoma segmentation model using a UNet with an evidential PET/CT fusion layer. Single-modality volumes are trained separately to obtain initial segmentation maps, and an evidential fusion layer is proposed to fuse the two pieces of evidence using Dempster-Shafer theory (DST). Moreover, a multi-task loss function is proposed: in addition to the use of the Dice loss for PET and CT segmentation, a loss function based on the concordance between the two segmentations is added to constrain the final segmentation. We evaluate our proposal on a database of polycentric PET/CT volumes of patients treated for lymphoma, delineated by experts. Our method achieves accurate segmentation results with a Dice score of 0.726, without any user interaction. Quantitative results show that our method outperforms state-of-the-art methods.





1 Introduction

In the clinical diagnosis and radiotherapy planning of lymphoma, PET/CT scanning is an effective imaging tool for tumor segmentation. In PET volumes, the standardized uptake value (SUV) is widely used to locate and segment lymphomas because of its high sensitivity and specificity [11]. Moreover, CT is usually used in combination with PET because of its good representation of anatomical features.
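For context, the SUV mentioned above normalizes the measured tissue activity by the injected dose per unit of body weight; a minimal sketch (units and function name are illustrative, not from the paper, and a tissue density of 1 g/mL is assumed):

```python
def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    # Standardized uptake value: tissue activity concentration divided
    # by injected dose per gram of body weight (standard definition,
    # assuming a tissue density of 1 g/mL)
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)
```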

Considering the multiplicity of lymphoma sites (sometimes more than 100) and the wide variation in distribution, shape, type and number of lymphomas, whole-body PET/CT lymphoma segmentation is still challenging, although many segmentation methods have been proposed (see a lymphoma patient in Fig. 1). Computer-aided methods for lymphoma segmentation can be classified into three main categories: SUV-threshold-based, region-growing-based and Convolutional-Neural-Network (CNN)-based [7] methods.

Figure 1: Examples of PET and CT slices with lymphomas. The first and second rows show PET and CT slices of one patient in axial, sagittal and coronal views, respectively.

For PET volumes, it is common to use fixed SUV thresholds to segment lymphomas. This kind of method is fast, but it lacks flexibility in boundary delineation and requires clinicians to locate the region of interest. Region-growing-based methods have been proposed to overcome the boundary-delineation limitation of SUV-based methods by taking texture and shape information into account. The common idea is to initialize the volumes with seeds automatically for segmenting lymphomas [8][4]. However, these methods still require the region of interest to be delineated manually and are time-consuming.
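A fixed-threshold SUV segmentation of the kind described above can be sketched as follows (the 2.5 threshold is a commonly used illustrative value, not taken from this paper):

```python
import numpy as np

def suv_threshold_mask(suv_volume, threshold=2.5):
    # Binary lymphoma mask from a fixed SUV cutoff; voxels strictly
    # above the threshold are labeled as candidate lesion
    return suv_volume > threshold
```

This illustrates the limitation noted in the text: the cutoff alone decides the boundary, with no use of texture or shape.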

In recent years, CNNs have achieved great success in computer vision tasks [5][2], as well as in the medical domain. The Fully Convolutional Network (FCN) model [10] is the first fully convolutional neural network that could be trained end-to-end for pixel-wise classification. UNet [13], a successful modification and extension of FCN, has become the most popular network for medical image segmentation. Based on UNet, many extensions and optimizations have been proposed, such as Deep3D-UNet [18], attention-UNet [12], etc. In [6], Hu et al. propose a Conv3D-based multi-view PET image fusion strategy for lymphoma segmentation, in which 2D and 3D segmentations are performed separately and the results are fused by a Conv3D layer. In [7], Li et al. propose a whole-body PET/CT lymphoma segmentation method that fuses CT and PET slices by concatenating them before training; the method is based on a two-flow architecture (segmentation flow and reconstruction flow). Using a similar approach, Blanc-Durand et al. propose a CNN-based segmentation network for diffuse large B cell lymphoma by concatenating PET and CT as two-channel inputs [1].

It should be noted that the effective fusion of multi-modality information is of great importance in the medical domain. A single-modality image often does not contain enough information and is often tainted with uncertainty. This is why physicians always use PET/CT volumes together for lymphoma segmentation and radiotherapy. Using CNNs, researchers have mainly adopted probabilistic approaches to data fusion, which can be classified into three strategies [17]: image-level, feature-level and decision-level fusion. However, probabilistic fusion is unable to effectively manage conflicts that occur when the same voxel is assigned two different labels by CT and PET. Dempster-Shafer theory (DST) [3][14], also known as belief function theory or evidence theory, is a formal framework for information modeling, evidence combination and decision-making with uncertain or imprecise information. Despite the low resolution and contrast of medical images, DST's high ability to describe uncertainty allows us to represent evidence more faithfully than probabilistic approaches. Researchers in the medical imaging community have started to actively investigate the use of DST for handling uncertain, imprecise sources of information in different medical tasks, such as medical image retrieval [15], lesion segmentation [9], etc.

In this work, we propose a DST-based PET/CT image fusion model for 3D lymphoma segmentation. To our knowledge, this is the first multi-modality PET/CT volume fusion method using a CNN and DST. The main contributions of this work are (1) a CNN architecture with a DST-based fusion layer that effectively handles uncertainty and conflict when combining PET and CT information; (2) a multi-task loss function making it possible to optimize different segmentation tasks and to increase segmentation accuracy; (3) a 3D segmentation model with end-to-end training for whole-body lymphoma segmentation.

2 Methods

2.1 Network Architecture

Fig. 2 shows the workflow of the multi-modality fusion-based lymphoma segmentation framework. It is composed of two modified encoder-decoder (UNet) modules and an evidential fusion layer. To reduce computation cost, we reduce the number of convolution filters of the UNet to obtain a "slim UNet". A fusion layer is constructed based on DST, which will be explained in Section 2.2. The two modality images, PET and CT, are taken as inputs to our framework. We first feed the preprocessed PET volume into UNet1 and the preprocessed CT volume into UNet2 to independently compute their segmentation probability maps. These two 3D maps are then transferred to the fusion layer, which computes an evidential segmentation map. For training, a multi-task loss function is proposed to minimize the Dice loss and mean square loss between the three segmentation maps and the masks, as explained in Section 2.3.

Figure 2: The global segmentation framework.

2.2 Evidential Fusion Strategy

Let $\Omega$ be a finite set of possible answers to some question, one and only one of which is true. Evidence about that question can be represented by a mapping $m$ from $2^\Omega$ to $[0,1]$, called a mass function, such that $\sum_{A \subseteq \Omega} m(A) = 1$. For any hypothesis $A \subseteq \Omega$, the quantity $m(A)$ represents a share of a unit mass of belief allocated to the hypothesis that the truth is in $A$, and which cannot be allocated to any strict subset of $A$ based on the available evidence. Two mass functions $m_1$ and $m_2$ representing two independent items of evidence can be combined by Dempster's rule [14] defined as

$$(m_1 \oplus m_2)(A) = \frac{1}{1-\kappa} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad (1)$$

for all $A \subseteq \Omega$, $A \neq \emptyset$, and $(m_1 \oplus m_2)(\emptyset) = 0$. In Eq. (1), $\kappa$ represents the degree of conflict between $m_1$ and $m_2$, defined as

$$\kappa = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C). \qquad (2)$$

If all focal sets of $m$ are singletons, $m$ is said to be Bayesian; it is then equivalent to a probability distribution. In our case, the lymphoma segmentation task can be considered as a two-class classification task; the set of possibilities is $\Omega = \{0, 1\}$, where 1 and 0 stand, respectively, for presence and absence of lymphoma in a given voxel. Two Bayesian mass functions $m_{PET}$ and $m_{CT}$ can be obtained from the two probability segmentation maps computed by UNet1 and UNet2. They are aggregated by Dempster's rule (1).
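Eqs. (1)-(2) can be implemented for arbitrary mass functions as a short sketch (the dict-of-frozensets representation and function name are ours, not from the paper):

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule: multiply masses over all pairs of focal sets,
    # accumulate products on the intersections, and renormalize by
    # 1 - kappa (Eq. 1); kappa is the total mass falling on empty
    # intersections (Eq. 2).
    fused, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in fused.items()}
```

For the Bayesian binary case of the segmentation task, the focal sets are simply the singletons `frozenset({0})` and `frozenset({1})`.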

The two segmentation results obtained from PET and CT cannot easily classify boundary voxels, due to low signal-to-noise ratio and low contrast, resulting in segmentation uncertainty. Moreover, when $m_{PET}$ and $m_{CT}$ assign the same voxel to different classes, there is a high conflict between the two segmentation results. These problems are difficult to solve with classical fusion methods. In this work, the proposed evidential fusion layer allows us to address them thanks to Dempster's rule. To be specific, for a given voxel, if one modality gives a probability of 0.26 that it belongs to lymphoma, and the other gives a probability of 0.85, the two segmentation results are contradictory and it is unreasonable to simply fuse them by linear combination or majority voting. In our evidential fusion layer, we can reassign the probability that the voxel belongs to lymphoma using Eqs. (1)-(2): with $\kappa = 0.26 \times 0.15 + 0.74 \times 0.85 = 0.668$, the fused mass is $0.26 \times 0.85 / (1 - \kappa) \approx 0.67$. This rule takes the conflict into consideration and yields more reliable segmentation results.
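The voxel-wise reassignment above can be sketched for whole probability maps (a minimal NumPy illustration of the fusion layer for Bayesian masses on $\Omega = \{0, 1\}$; names are ours):

```python
import numpy as np

def evidential_fusion(p_a, p_b, eps=1e-8):
    # Dempster fusion of two Bayesian probability maps, per voxel:
    # `conflict` (kappa, Eq. 2) collects the mass assigned to
    # contradictory labels, and the agreeing mass for class 1 is
    # renormalized by 1 - kappa (Eq. 1).
    conflict = p_a * (1.0 - p_b) + (1.0 - p_a) * p_b
    return p_a * p_b / (1.0 - conflict + eps)
```

For the conflicting voxel in the text, `evidential_fusion(0.26, 0.85)` gives about 0.67, whereas simple averaging would give 0.555.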

2.3 Multi-task loss function

The lymphoma segmentation task shows sub-optimal performance on CT, which may lead to misleading fusion results in our model. To avoid this problem, a multi-task loss function is proposed to exploit all the available information during training and increase segmentation accuracy. Since lymphomas are visible on both PET and CT, even though they do not have exactly the same shape in the two volumes, they should overlap as much as possible. Here we use the PET mask as ground truth for both PET and CT to constrain the overlap between PET and CT.

As shown in Fig. 2, we define three loss functions: $L_{CT}$, $L_{PET}$ and $L_S$, measuring the discrepancy between the ground truth and, respectively, the CT segmentation map, the PET segmentation map, and the final segmentation output. For $L_{PET}$ and $L_{CT}$, we use the Dice loss,

$$L_{PET} = 1 - \frac{2\sum_{i=1}^{N} p_i g_i}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} g_i}, \qquad L_{CT} = 1 - \frac{2\sum_{i=1}^{N} c_i g_i}{\sum_{i=1}^{N} c_i + \sum_{i=1}^{N} g_i}, \qquad (3)$$

where $N$ is the number of voxels of the segmentation outputs, $p$ and $c$ are the PET and CT segmentation outputs, and $g$ is the ground truth of lymphoma in PET. For $L_S$, we use the mean square loss,

$$L_S = \frac{1}{N} \sum_{i=1}^{N} (s_i - g_i)^2, \qquad (4)$$

where $s$ is the final segmentation output. The multi-task loss function is defined as

$$L = 0.75\, L_{CT} + 0.25\, L_{PET} + L_S. \qquad (5)$$

Here we set the weight of $L_{CT}$ to 0.75 and the weight of $L_{PET}$ to 0.25 to make the model learn more from the hard example (CT).
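The multi-task loss above can be sketched in NumPy as follows (a minimal illustration under the stated 0.75/0.25 weighting; the variable names and the soft-Dice epsilon are ours):

```python
import numpy as np

def dice_loss(pred, gt, eps=1e-7):
    # Soft Dice loss over all voxels
    inter = np.sum(pred * gt)
    return 1.0 - 2.0 * inter / (np.sum(pred) + np.sum(gt) + eps)

def multitask_loss(s_pet, s_ct, s_fused, gt, w_ct=0.75, w_pet=0.25):
    # Weighted Dice terms for the two branches plus a mean square
    # term on the fused evidential map
    l_pet = dice_loss(s_pet, gt)
    l_ct = dice_loss(s_ct, gt)
    l_fuse = np.mean((s_fused - gt) ** 2)
    return w_ct * l_ct + w_pet * l_pet + l_fuse
```

In the actual model this would be expressed with TensorFlow tensors so that gradients flow through both UNet branches and the fusion layer.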

2.4 Implementation Details

All methods were implemented in Python using the TensorFlow framework and were trained and tested on a desktop with a 2.20 GHz Intel(R) Xeon(R) E5-2698 v4 CPU and a Tesla V100-SXM2 graphics card with 32 GB of GPU memory.

3 Experiments and Analysis

3.1 Dataset and Preprocessing

The experimental dataset consists of 173 labeled cases of real PET/CT volumes, whose labels indicate the ground truth of lymphoma. All PET/CT data were acquired from the Henri Becquerel hospital. The study was approved as a retrospective study by the Henri Becquerel Center Institutional Review Board. All patients' information was de-identified and anonymized prior to analysis. The size and spatial resolution of the CT volumes and their corresponding masks vary from case to case, as do those of the PET volumes and their masks. For preprocessing, we first registered the CT volumes and matched them with the PET volumes. Then we resized the PET, CT and mask volumes to a common size. Both PET and CT volumes were normalized to the standard distribution by the linear standardization method. Two kinds of data augmentation were applied to enrich the training data: deformation and affine transformation. We used 80% of the data for training, 10% for validation and 10% for testing. The Dice score, Precision and Recall were used to evaluate the segmentation results of lymphoma at the voxel level.
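The normalization and data split described above can be sketched as follows (a hedged illustration: we read "linear standardization" as per-volume z-score normalization, and the split helper is our own):

```python
import numpy as np

def zscore_normalize(volume):
    # Per-volume standardization to zero mean and unit variance
    # (our reading of the "linear standardization method")
    v = volume.astype(np.float64)
    return (v - v.mean()) / (v.std() + 1e-8)

def split_indices(n_cases, seed=0):
    # 80% / 10% / 10% train / validation / test split over cases
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_cases)
    n_train, n_val = int(0.8 * n_cases), int(0.1 * n_cases)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```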

3.2 Results and Discussion

Figure 3: Comparison of segmentation results of CT, PET, PET/CT concatenation [13], PET/CT concatenation with [1] and PET/CT evidential fusion. The ground truth is marked in white and segmentation results are marked in red.

The quantitative results of the experiments are reported in Table 1. UNet is the baseline method. We first tested the segmentation performance when using only the PET or the CT volume. Then we concatenated the PET and CT volumes as inputs to the UNet; this is an input-level fusion method. Finally, we used two independent UNets for PET and CT, respectively, with the fusion carried out at the decision level by the proposed evidential fusion layer. Compared with mono-modality input, our proposal shows an obvious advantage. Compared with the concatenation-based fusion method, our proposal yields 3%, 4% and 6% increases in Dice score, Precision and Recall, respectively. Fig. 3 compares the results of these four input configurations. We can see from the figure that mono-modality input does not allow us to segment small regions, especially for CT. The concatenation of PET and CT solves this problem to some extent [1][13], but the result is still not ideal. The evidential fusion of PET and CT achieves better segmentation results, especially for the small lymphomas located at the top of the images.
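For reference, the voxel-level metrics reported in Table 1 can be computed with the standard definitions (the function name is ours):

```python
import numpy as np

def voxel_metrics(pred, gt):
    # Voxel-level Dice score, Precision and Recall for binary masks
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)   # true positives
    fp = np.sum(pred & ~gt)  # false positives
    fn = np.sum(~pred & gt)  # false negatives
    dice = 2.0 * tp / (2.0 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall
```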

| Models                            | Input modality | Dice score | Precision | Recall    |
|-----------------------------------|----------------|------------|-----------|-----------|
| UNet (mono-input)                 | PET            | 0.67±0.02  | 0.74±0.05 | 0.65±0.03 |
| UNet (mono-input)                 | CT             | 0.49±0.05  | 0.71±0.02 | 0.45±0.02 |
| UNet (concatenation-based fusion) | PET+CT         | 0.69±0.01  | 0.67±0.01 | 0.74±0.05 |
| EUNet (evidential fusion)         | PET+CT         | 0.72±0.04  | 0.71±0.06 | 0.80±0.05 |
| Zeng et al. [16]                  | PET+CT         | 0.68±0.02  | 0.68±0.01 | 0.68±0.01 |
| Hu et al. [6]                     | PET+CT         | 0.66±0.03  | 0.71±0.04 | 0.67±0.03 |
| Blanc-Durand et al. [1]           | PET+CT         | 0.73±0.20  | 0.75±0.22 | 0.83±0.17 |

Table 1: Performance comparison with the baseline methods on the test set.

Since no public lymphoma dataset is currently available, quantitative comparison with other state-of-the-art methods is difficult, because different datasets and different lymphoma types are used. However, we still compare our method with the state of the art in Table 1. In [6], Hu et al. achieve a Dice score of 0.66±0.03 on a dataset of 109 lymphoma patients using a convolutional fusion of 2D slices and 3D volumes. In [1], Blanc-Durand et al. report a Dice score of 0.73±0.20 based on training on 511 lymphoma patients. Although the authors report results comparable to ours, the performance of our model is more stable and requires less training data. We also tested the method of [1] on our dataset and obtained a Dice score of 0.64±0.02, where the superiority of our proposal is obvious. A qualitative comparison with our model is shown in Fig. 3. In the fifth image of Fig. 3, their model shows signs of slight overfitting but is visually acceptable, whereas our model shows visually correct segmentation results. Generally speaking, our model yields better results than state-of-the-art methods thanks to DST-based evidential fusion.

A qualitative comparison is shown in Fig. 4. Here we overlay PET and CT in the same image and superimpose the segmented mask. Column 1 shows one slice of patient 1, in which a large lymphoma is present; our model segments it correctly. Column 2 shows a slice of patient 2, in which multiple lymphomas are present. They are more difficult to segment because some of them are very small. However, our model still obtains satisfactory results.

Figure 4: Comparison results between two different patients using our EUNet model. From left to right, each column shows the corresponding segmentation results and the ground truth. The lymphomas are shown in white.

4 Conclusion

In this work, a DST-based deep multi-modality medical image fusion strategy has been proposed to deal with the problem of uncertainty and conflict within a deep convolutional neural network. A two-branch segmentation module first processes the PET and CT volumes separately. Then the two segmentation maps are fused by the evidential fusion layer. Qualitative and quantitative evaluation results are promising compared to the baseline and state-of-the-art methods. Future research will aim at improving the network architecture to better exploit the potential of DST to represent and combine uncertain information.


  • [1] P. Blanc-Durand, S. Jégou, S. Kanoun, A. Berriolo-Riedinger, C. Bodet-Milin, F. Kraeber-Bodéré, T. Carlier, S. Le Gouill, R. Casasnovas, M. Meignan, et al. (2020) Fully automatic segmentation of diffuse large b cell lymphoma lesions on 3d fdg-pet/ct for total metabolic tumour volume prediction using a convolutional neural network.. European Journal of Nuclear Medicine and Molecular Imaging, pp. 1–9. Cited by: §1, Figure 3, §3.2, §3.2, Table 1.
  • [2] A. Bochkovskiy, C. Wang, and H. M. Liao (2020) Yolov4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934. Cited by: §1.
  • [3] A. P. Dempster (1967) Upper and lower probability inferences based on a sample from a finite univariate population. Biometrika 54 (3-4), pp. 515–528. Cited by: §1.
  • [4] P. Desbordes, C. Petitjean, and S. Ruan (2014) 3D automated lymphoma segmentation in pet images based on cellular automata. In 2014 4th International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6. Cited by: §1.
  • [5] S. Hershey, S. Chaudhuri, D. P. Ellis, J. F. Gemmeke, A. Jansen, R. C. Moore, M. Plakal, D. Platt, R. A. Saurous, B. Seybold, et al. (2017) CNN architectures for large-scale audio classification. In 2017 ieee international conference on acoustics, speech and signal processing (icassp), pp. 131–135. Cited by: §1.
  • [6] H. Hu, L. Shen, T. Zhou, P. Decazes, P. Vera, and S. Ruan (2020) Lymphoma segmentation in pet images based on multi-view and conv3d fusion strategy. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pp. 1197–1200. Cited by: §1, §3.2, Table 1.
  • [7] H. Li, H. Jiang, S. Li, M. Wang, Z. Wang, G. Lu, J. Guo, and Y. Wang (2019) DenseX-net: an end-to-end model for lymphoma segmentation in whole-body pet/ct images. IEEE Access 8, pp. 8004–8018. Cited by: §1, §1.
  • [8] H. Li, W. L. Thorstad, K. J. Biehl, R. Laforest, Y. Su, K. I. Shoghi, E. D. Donnelly, D. A. Low, and W. Lu (2008) A novel pet tumor delineation method based on adaptive region-growing and dual-front active contours. Medical physics 35 (8), pp. 3711–3721. Cited by: §1.
  • [9] C. Lian, S. Ruan, T. Denœux, H. Li, and P. Vera (2018) Joint tumor segmentation in pet-ct images using co-clustering and fusion based on belief functions. IEEE Transactions on Image Processing 28 (2), pp. 755–766. Cited by: §1.
  • [10] J. Long, E. Shelhamer, and T. Darrell (Jun. 2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, pp. 3431–3440. Cited by: §1.
  • [11] U. Nestle, S. Kremp, A. Schaefer-Schuler, C. Sebastian-Welsch, D. Hellwig, C. Rübe, and C. Kirsch (2005) Comparison of different methods for delineation of 18f-fdg pet–positive tissue for target volume definition in radiotherapy of patients with non–small cell lung cancer. Journal of nuclear medicine 46 (8), pp. 1342–1348. Cited by: §1.
  • [12] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, et al. (2018) Attention u-net: learning where to look for the pancreas. arXiv preprint arXiv:1804.03999. Cited by: §1.
  • [13] O. Ronneberger, P. Fischer, and T. Brox (Oct. 2015) U-Net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany. Cited by: §1, Figure 3, §3.2.
  • [14] G. Shafer (1976) A mathematical theory of evidence. Vol. 42, Princeton university press. Cited by: §1, §2.2.
  • [15] S. K. Sundararajan, B. Sankaragomathi, and D. S. Priya (2019) Deep belief cnn feature representation based content based image retrieval for medical images. Journal of medical systems 43 (6), pp. 1–9. Cited by: §1.
  • [16] G. Zeng, X. Yang, J. Li, L. Yu, P. Heng, and G. Zheng (2017) 3D U-net with multi-level deep supervision: fully automatic segmentation of proximal femur in 3D MR images. In International Workshop on Machine Learning in Medical Imaging, pp. 274–282. Cited by: Table 1.
  • [17] T. Zhou, S. Ruan, and S. Canu (2019) A review: deep learning for medical image segmentation using multi-modality fusion. Array 3, pp. 100004. Cited by: §1.
  • [18] W. Zhu, Y. Huang, H. Tang, Z. Qian, N. Du, W. Fan, and X. Xie (2018) Anatomynet: deep 3D squeeze-and-excitation u-nets for fast and fully automated whole-volume anatomical segmentation. bioRxiv, pp. 392969. Cited by: §1.