Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer

10/05/2018 · by Ashnil Kumar, et al.

The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images requires combining the sensitivity of PET to detect abnormal regions with anatomical localization from CT. However, current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, a high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis, e.g. region detection. We evaluated our CNN on a region detection problem using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image analysis (pre-fused inputs, multi-branch techniques, multi-channel techniques) and demonstrated that our approach had a significantly higher accuracy (p < 0.05) than the baselines.


I Introduction

Medical imaging is a cornerstone of modern healthcare, providing unique diagnostic, and increasingly therapeutic, capabilities that affect patient care. The range of medical imaging modalities is wide, but in essence they provide anatomical and functional information about structure and physiopathology. The multi-modality 18F-fluorodeoxyglucose (FDG) positron emission tomography and computed tomography (PET-CT) scanner is regarded as the imaging device of choice for the diagnosis, staging, and assessment of treatment response in many cancers [1]. PET-CT combines the sensitivity of PET to detect regions of abnormal function and the anatomical localization provided by CT [2]. With PET, sites of disease usually display greater FDG uptake (glucose metabolism) than normal structures. The spatial extent of the disease within a particular structure, however, cannot be accurately determined due to the inherently lower resolution of PET when compared to CT and MR imaging, tumor heterogeneity, and the partial volume effect [3]. CT provides the anatomical localization of sites of abnormal FDG uptake in PET and so adds precision to the imaging interpretation [4]. One example clinical domain that has benefited greatly from PET-CT imaging is the evaluation of non-small cell lung cancer (NSCLC), the most common type of lung cancer. In NSCLC, the extent of the disease at diagnosis is the most important determinant of patient outcome; PET-CT is able to detect sites of disease where there are no abnormalities in the underlying structure on CT, hence its value in patient management [5, 6, 7, 8].

The role of PET-CT in cancer care has provoked extensive research into methods to detect, classify, and retrieve PET-CT images [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. These methods are divided into two main categories: (i) methods that process each modality separately and then combine the modality-specific features [9, 10, 11, 12, 13, 14, 15, 16, 17, 18], and (ii) methods that combine or fuse complementary features from each modality [19, 20, 21, 22]. While our experimental evaluation in this paper is focused on region detection, we provide an overview of the broader field below and in Section II.

Methods that process each modality separately are inherently limited when the intent is to consider both the function and anatomical extent of disease. For example, in a chest CT depicting a lung tumor that is causing collapse in adjacent lung tissue, both the tumor and the collapse can appear identical. Similarly, some areas of high FDG uptake on PET images may be linked to normal physiological uptake, such as in the heart, and these regions need to be filtered out based upon knowledge about anatomical characteristics from CT to differentiate them from abnormal PET regions [23, 24, 25, 26]. In contrast, methods that fuse information from the two modalities often use a priori knowledge about characteristics of the different modalities to prioritize information from the two modalities for different tasks. For example, Song et al. [16] used both PET and CT for lesion characterization but disregarded PET features when computing spatial appearance features due to the low spatial resolution of PET. Alternatively, they may fuse information using a representation that models semantically-derived relationships between the two modalities. For example, a graph structure based upon the criteria from cancer staging manuals [6] was previously used to associate PET tumor features with the CT features of nearby anatomical structures [22]. As such, these fusion methods are highly dependent upon an external predefined specification of the relationship between the features from both modalities. Hence, the ability to derive an application-specific fusion would reduce this dependency.

Current image fusion strategies in the general (non-medical) domain derive spatially varying fusion weights from the local visual characteristics of the different image data [27]. Features such as pixel variance, contrast, and color saturation are used to derive task-specific fusion ratios for different regions of interest (ROIs) within the images [28, 29]. These fusion methods can thus adapt to and prioritize different content at different locations in the images according to the underlying image features that are relevant to the different images being analyzed. This results in the capacity to enhance specific information for different image data. Adapting such spatially varying pixel-level fusion for PET-CT image analysis will enable the adaptive fusion of visual characteristics of different diseases (e.g. uptake in homogeneous vs. heterogeneous tumors) and different anatomical locations (e.g. tissue density in the lungs vs. mediastinum). Our hypothesis is that a spatially varying fusion of multi-modality image data that is derived from the underlying visual features will enable better integration of the complementary information in multi-modality images for automatic image analysis applications such as region detection.
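As an illustration of this general principle (and not the method proposed in this paper), a spatially varying fusion weight can be derived from a local statistic such as pixel variance, as in the strategies of [28, 29]. The sketch below is a hypothetical NumPy example in which each pixel's weight is proportional to the local variance of its modality, so that locally detailed content is prioritized; the function names and neighborhood size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=9):
    """Per-pixel variance within a size x size neighborhood."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.clip(mean_sq - mean * mean, 0, None)

def variance_weighted_fusion(img_a, img_b, size=9, eps=1e-8):
    """Fuse two registered images with spatially varying weights derived
    from local variance (an example of pixel-level fusion, not the
    co-learning approach proposed in this paper)."""
    var_a = local_variance(img_a, size)
    var_b = local_variance(img_b, size)
    w_a = var_a / (var_a + var_b + eps)   # per-pixel weight for modality A
    return w_a * img_a + (1.0 - w_a) * img_b

# Example: fuse two registered single-channel images of the same size.
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
fused = variance_weighted_fusion(a, b)
```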

The state-of-the-art in feature learning, selection, and extraction is deep learning [30, 31]. Convolutional neural networks (CNNs) [32] are deep learning methods for object detection, classification, and analysis of image data. CNNs have shown superiority across this spectrum when compared to non-deep learning methods in various benchmarks, e.g., the ImageNet Large Scale Visual Recognition Challenge [33]. This dominance relates to the ability of CNNs to implicitly learn image features that are ‘meaningful’ for a given task directly from the image data. In medical image analysis, CNNs have shown improved object detection, classification, and segmentation performance compared to traditional approaches such as support vector machines [34, 35, 36, 37, 38, 39, 40, 41].

For multi-modality medical image analysis, many investigators have used CNN-derived features, directly or with some tuning, that were obtained from training on natural (photographic) images [42, 10, 19, 11, 13]. CNNs that were designed for multi-modality medical images were either used as a filter after modality-specific processing [9], or alternatively focused on images of the same modality obtained with different acquisition protocols, e.g., T1- and T2-weighted magnetic resonance (MR) images showing different tissue properties [39, 40]. Other deep learning approaches have included auto-encoders to learn shared representations that capture the similarities in features across multiple modalities [41]. In these studies, however, CNNs were generally used as feature extractors and classifiers without consideration of how the features from each modality were combined, thus relying either on pre-fusion of the input data or on multi-channel approaches where each input modality was initially processed by an independent CNN kernel (or weight tensor). In both circumstances, the images or features were fused without consideration of the spatially varying visual characteristics at different image locations. We provide further description of related studies in Section II.

In this paper, our aim is to improve fusion of the complementary information in multi-modality images for automatic medical image analysis. We present a new CNN that learns to fuse complementary anatomical and functional data from PET-CT images in a spatially varying manner, for the detection of different ROIs. The novelty of our CNN is its ability to produce a fusion map that explicitly quantifies the fusion weights for the features in each modality. This is in contrast to CNNs that use multi-channel inputs [19, 39] or modality-specific encoder branches [9, 41, 13], where modalities are implicitly fused. We employ an experimental comparison of our CNN with these baseline methods to empirically support our hypothesis that the spatially varying feature fusion enabled by our CNN enhances the detection of different ROIs in PET-CT lung cancer images.

II Related Work

The computerized analysis of multi-modality medical imaging has been a widely pursued area of research. In recent work, Bagci et al. [4] proposed a method to simultaneously delineate ROIs in PET, PET-CT, PET-MR imaging, and fused MR-PET-CT images using a random walk segmentation algorithm with an automated similarity-based seed selection process. Zhao et al. [14] combined dynamic thresholding, watershed segmentation, and support vector machine (SVM) classification to classify solitary pulmonary nodules on the basis of CT texture features and PET metabolic features. Similarly, Lartizien et al. [15] used texture feature selection and SVM classification for staging of lymphoma patients based on their PET-CT imaging data. Y. Song et al. [16] and Q. Song et al. [17] used the context of PET and CT regions to characterize tumors with spatial and visual consistency. Han et al. [18] segmented tumors from PET-CT images, formulating the problem as a Markov Random Field with modality-specific energy terms for PET and CT characteristics. In our prior work [20, 21], we used multi-stage discriminative models to classify ROIs in thoracic PET-CT images and in full-body lymphoma studies. In our PET-CT retrieval research [22, 43], we have also derived a graph-based model that attempts to bridge the semantic gap by modeling the spatial characteristics that are important for lung cancer staging [5].

Recently, the strength of deep learning methods for feature learning and pattern recognition [31, 30], and the strength of CNNs for image processing [32, 33], have provoked their application to medical image analysis. Many initial studies validated the applicability of deep learning to the medical domain using approaches that transferred features learned from a non-medical domain and tuned them to a specific medical task [34, 35], e.g., classification of the modality of the medical images depicted in research literature [36] and the localization of planes in fetal ultrasound images [38]. Later studies designed new CNNs for specific clinical challenges, such as the classification of interstitial lung disease in CT images [37].

In the multi-modality medical image analysis domain, MR images obtained with different acquisition protocols have been treated as multi-modality images, with the reasoning that the different MR images showed different aspects of the same anatomical structure [39, 40, 41]. Zhang et al. [39] designed a CNN-based segmentation approach for brain MR images. Tseng et al. [40] segmented ROIs with complementary features that were learned via a convolution across the different MR images. Van Tulder and de Bruijne [41] used an unsupervised approach to learn a shared data representation of MR images, which acted as a robust feature descriptor for classification applications. Liu et al. [42] used a convolutional autoencoder to detect air, bone, and soft tissue for attenuation correction in PET-MR images. Teramoto et al. [9] used a CNN as a second-stage classifier to determine whether candidate lung nodules in PET-CT were false positives. Bi et al. [10] used domain-transferred CNNs to extract PET features for PET-CT lymphoma classification. Bradshaw et al. [19] fine-tuned a CNN, which was pre-trained on ImageNet data, for PET-CT images in a multi-channel input approach, using a CT slice and two maximum intensity projections of the PET data as the inputs. Xu et al. [11] cascaded two V-Nets [12] to detect bone lesions, using CT alone as the input to the first V-Net and a pre-fused PET-CT image for the second. Similarly, Zhong et al. [13] trained one U-Net [44] for PET and one for CT, combining the results using a graph cut algorithm. None of this prior work, however, considered how the visual characteristics, specific to each image at different locations, could be integrated in a spatially varying manner.

III Methods

III-A Materials

Our dataset comprised 50 FDG PET-CT scans of patients with pathologically proven NSCLC. The studies were acquired on a Biograph 128-slice mCT (PET-CT scanner; Siemens Healthineers, Hoffman Estates, IL, USA). The mCT is a high-resolution tomograph with high-definition reconstruction, time-of-flight, and flow motion characteristics. The studies were performed in the Department of Molecular Imaging at the Royal Prince Alfred Hospital, Sydney, Australia. Each study comprised one CT volume and one PET volume: the CT pixel spacing was 0.98 mm × 0.98 mm, the PET resolution was 200 × 200 pixels at 4.07 mm × 4.07 mm, with a slice thickness and an interslice distance of 3 mm. Studies contained between 1 and 7 tumors (inclusive) in the thorax; we only used the slices from the thorax containing the ROIs (852 total slices). All data were de-identified. All images were rescaled to a common pixel resolution in the x-y axes. The PET images were normalized by a transformation to the standard uptake value (SUV) to account for variation in PET tracer uptake related to isotope dose and patient body mass [45].
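The SUV transformation referred to above is the standard body-weight SUV; the sketch below is a minimal illustration only (the function name, argument units, and the 1 mL ≈ 1 g tissue-density assumption are ours, not taken from the authors' preprocessing code).

```python
import numpy as np

def suv_normalize(activity_kbq_per_ml, injected_dose_mbq, body_mass_kg):
    """Convert a PET activity-concentration image into standard uptake values.

    SUV = tissue activity concentration / (injected dose / body mass),
    so an SUV of 1.0 corresponds to a uniformly distributed tracer.
    """
    # Convert the injected dose to kBq and body mass to grams so that
    # (kBq/mL) / (kBq/g) is dimensionless (assuming 1 mL of tissue ~ 1 g).
    dose_kbq = injected_dose_mbq * 1000.0
    mass_g = body_mass_kg * 1000.0
    return np.asarray(activity_kbq_per_ml) * mass_g / dose_kbq
```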

The ground truth was derived from the diagnostic imaging report which detailed the locations of the primary tumor and any involved thoracic lymph nodes. All reports were done by a single, experienced imaging specialist who has read over 80,000 PET and PET-CT scans. We used the report findings to drive a semi-automatic process for ROI labeling. We applied a commonly used adaptive thresholding algorithm [46] to extract the lung ground truth from the CT volume. Similarly, we used connected thresholding to coarsely determine the mediastinum. We extracted the tumor ground truth using 40% peak SUV connected thresholding to detect the ‘hot spots’ identified in the diagnostic reports  [47]. Minor manual adjustments of the thresholding parameters were done to ensure the ROIs corresponded to areas described in the report.
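As an approximation of the 40% peak-SUV connected thresholding described above, the following sketch extracts the connected component around a seed pixel inside a reported hot spot. The seed-based peak estimate and function names are assumptions, and the authors' semi-automatic pipeline additionally involved manual parameter adjustment.

```python
import numpy as np
from scipy.ndimage import label

def tumor_mask_from_hotspot(suv_image, seed, fraction=0.4):
    """Extract a tumor ROI as the connected region around `seed` whose SUV
    exceeds `fraction` of the peak SUV of the hot spot.

    suv_image : 2D array of SUVs (a single slice).
    seed      : (row, col) of a pixel inside the reported hot spot.
    """
    peak_suv = suv_image[seed]            # peak taken at the seed for simplicity
    binary = suv_image >= fraction * peak_suv
    labels, _ = label(binary)             # connected-component labeling
    return labels == labels[seed]         # keep only the component containing the seed
```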

We randomly divided the 50 PET-CT studies into a training set of 40 studies (690 slices) and a separate test set of 10 studies (162 slices). The training set was further randomly subdivided into two groups, each comprising 20 studies, for use in two-fold parameter validation (see Section III-F).

III-B Architecture Design

Fig. 1: The architecture of our CNN, comprising two modality-specific encoders, a co-learning component, and a reconstruction component; the black lines indicate inputs to operations as skip connections between non-adjacent layers.

Fig. 1 shows the architecture of our proposed CNN; note that the number alongside each feature map in the figure refers to the number of output channels in the feature map. Our CNN comprises four main components: two encoders (one for each modality), one co-learning and fusion component, and a reconstruction component. The purpose of the two encoders is to derive the image features that are most relevant to each specific image modality. The co-learning component uses the modality-specific features produced by the encoders to derive a spatially varying fusion map to weight the modality-specific features at different locations. Finally, the reconstruction component integrates the modality-specific fused features across multiple scales to produce the final prediction. The structure and behavior of these components are described in detail in the following subsections.

III-C Modality-Specific Encoders

Our CNN contains an encoder for PET images and a separate encoder for CT images. The purpose of each encoder is to extract the visual features that are relevant to the input image modality. Thus, the encoders were designed with stacked convolutional layers in a similar manner to the deep CNNs that have achieved high accuracy in image classification tasks, e.g., AlexNet [32] and VGGNet [48, 49]. As shown in Fig. 1, each encoder comprises four blocks that each contain two convolutional layers for feature map generation and a max pooling layer to down-sample the feature maps.
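A minimal sketch of one such encoder block, written with the tf.keras API rather than the authors' TensorFlow 1.4 code, is shown below; it assumes the 3 × 3 convolutions, 2 × 2 max pooling, batch normalization, and Leaky ReLU described in this section and listed in Table I.

```python
import tensorflow as tf

def encoder_block(x, channels=64, alpha=0.1):
    """One modality-specific encoder block: two conv layers then max pooling.

    Each convolution is followed by batch normalization and Leaky ReLU,
    mirroring the description in Section III-C.
    """
    for _ in range(2):
        x = tf.keras.layers.Conv2D(channels, (3, 3), padding='same')(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.LeakyReLU(alpha)(x)
    skip = x                                     # feature map passed to the co-learning unit
    x = tf.keras.layers.MaxPooling2D((2, 2))(x)  # down-sample for the next block
    return x, skip
```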

A consequence of this stacked structure is that as the weights of each layer change, the distribution of the outputs they produce also changes, potentially influencing the convolutional layers later in the network. During training, this means that even small changes in the weights of one layer may cascade and be amplified in deeper layers, requiring the layers to continuously adapt to new input distributions [50]. As our CNN includes inputs from two different imaging modalities, the co-learning and reconstruction components will be affected by the cascading weight changes from both encoders, which will slow convergence and thus hinder the learning process.

Let $F = W \ast X + b$ be the output feature map of a convolutional layer, where $X$ is the input to the convolutional layer, $\ast$ is the convolution operation, $W$ is the learned weights of the convolutional layer, and $b$ is the learned bias of the convolutional layer. We use a batch normalization layer [51] to normalize every dimension of the output feature map to a distribution with zero mean and unit variance, which acts to reduce the impact of these shifting distributions when the feature map is used as an input for subsequent convolutional layers.

We use the leaky rectified linear unit (Leaky ReLU) activation function [52] after feature map normalization:

$$\mathrm{LReLU}(\hat{f}) = \max(\hat{f}, 0) + \alpha \min(\hat{f}, 0) \qquad (1)$$

where $\hat{f}$ is a normalized feature and $\alpha$ is a parameter controlling the ‘leakiness’ of the activation function, with the constraint that $0 < \alpha < 1$. The Leaky ReLU activation avoids the dead neuron problem that can occur with the standard ReLU function [52], where some weights in $W$ can be updated to a value where their training gradients are forever stuck at 0, thus preventing the weights from being updated in the future. The parameter $\alpha$ enables the introduction of a small non-zero gradient when $\hat{f} < 0$, thereby preventing the weights from being stuck at an unrecoverable value. For simplicity of notation, we refer to the output of a convolutional layer by $F$ as the feature map generated from $X$ after convolution, batch normalization, and activation.

III-D Multi-modality Feature Co-Learning and Fusion

Fig. 2: A conceptual example of the co-learning unit that learns to derive the fusion from the feature maps of each modality; for simplicity the figure shows co-learning on a single feature map channel.

The co-learning component consists of two parts: (i) a co-learning unit, which can be thought of as a CNN that learns to derive spatially varying fusion maps, and (ii) a fusion operation that uses the fusion maps to prioritize different features. Fig. 2 shows a conceptual example of the function of the multi-modality co-learning unit. The inputs to the co-learning unit are two feature maps $F_{CT}$ and $F_{PET}$ (each from a block of one modality-specific encoder), each of size $W \times H \times C$ with width $W$, height $H$, and $C$ channels. These feature maps are stacked to form $S$, a $W \times H \times M \times C$ tensor with $M = 2$ the number of modalities. The channels of $S$ are then convolved with the channels of a learnable 3D kernel $K$ of size $k \times k \times M$, where $k$ is the width and height of the kernel, and $M$ is the number of modalities.

By performing a 3D convolution [53] without padding the modality dimension, we obtain for a given channel $c$ a feature map $Z_c$ with a singleton third dimension, where the value at location $(x, y)$ is determined from the neighborhood of both $F_{CT}$ and $F_{PET}$:

$$Z_c(x, y, 1) = \sum_{i=1}^{k} \sum_{j=1}^{k} \sum_{m=1}^{M} K_c(i, j, m) \, S_c(x+i, y+j, m) \qquad (2)$$

We then squeeze the singleton third dimension to obtain an output feature map $Z$ of size $W \times H \times 2C$, the same width and height as the two modality-specific input feature maps $F_{CT}$ and $F_{PET}$ and double the number of channels, which is important for the weighting of the modality-specific features by the co-learned fusion maps as described below.

Our intention is that the co-learned fusion map controls the level of importance given to information from each modality at each location, in contrast to the global fusion ratio in PET-CT pixel intermixing [54, 55, 56]. Thus the co-learned fusion maps directly affect the input distribution of the learnable layers that immediately follow the co-learning unit. Hence, we do not normalize the output of the 3D convolution. As with the encoders (see Section III-C), we used a Leaky ReLU activation function to obtain the multi-modality co-learned fusion map:

$$\Phi = \mathrm{LReLU}(Z + b_{\Phi}) \qquad (3)$$

where $b_{\Phi}$ are the learned biases. Note that the multi-modality fusion map $\Phi$ is obtained by the co-learning unit based on the spatial integration of the features from both modalities, since the 3D convolution operation considers the 3D neighborhood defined by the width, height, and modality of the stacked feature map $S$.

The fusion operation (depicted in Fig. 3) integrates the modality-specific feature maps according to the values (coefficients) in the multi-modality fusion map $\Phi$, as follows:

$$Y = \Phi \odot \sigma(F_{CT}, F_{PET}) \qquad (4)$$

where $Y$ is the fused co-learned feature map, $\sigma$ is the stacking (channel-wise concatenation) operation, and $\odot$ is an element-wise multiplication. This process merges the two modality-specific feature maps $F_{CT}$ and $F_{PET}$ and weights them by the co-learned multi-modality fusion map $\Phi$, similar to pixel intermixing. Our CNN (Fig. 1) generates four fused feature maps, one for each pair of encoder blocks. These fused feature maps are passed to the reconstruction part of the CNN (see Section III-E).

Fig. 3: Multiplying the spatially varying fusion map ($\Phi$) with the stacked modality-specific feature maps ($\sigma(F_{CT}, F_{PET})$) to generate a fused co-learned feature map ($Y$). Element-wise multiplication ensures that each value in $Y$ is a weighted form of a modality-specific feature. The red circles and blue circles indicate the element-wise multiplication of CT and PET features, respectively.
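A simplified NumPy sketch of the co-learning unit and fusion operation (Equations 2–4) is given below. It derives one fusion channel per stacked modality-specific channel with one k × k × 2 kernel per output channel; the exact channel wiring of the authors' 3D convolution may differ, so this is an illustration rather than their implementation.

```python
import numpy as np
from scipy.signal import correlate2d

def colearn_fusion(f_ct, f_pet, kernels, biases, alpha=0.1):
    """Simplified co-learning unit and fusion operation (Equations 2-4).

    f_ct, f_pet : modality-specific feature maps, each of shape (H, W, C).
    kernels     : shape (2C, k, k, 2); one k x k x 2 kernel per fusion channel.
    biases      : shape (2C,).
    Returns the fused co-learned feature map Y of shape (H, W, 2C).
    """
    stacked = np.concatenate([f_ct, f_pet], axis=-1)   # sigma(F_CT, F_PET), shape (H, W, 2C)
    c = f_ct.shape[-1]
    fusion = np.empty_like(stacked)
    for j in range(2 * c):
        # A 3D convolution with a (k, k, 2) kernel and no padding over the modality
        # dimension reduces to the sum of two 2D correlations (Equation 2).
        z = (correlate2d(f_ct[..., j % c], kernels[j, ..., 0], mode='same')
             + correlate2d(f_pet[..., j % c], kernels[j, ..., 1], mode='same')
             + biases[j])
        fusion[..., j] = np.where(z > 0, z, alpha * z)  # Leaky ReLU (Equation 3)
    return fusion * stacked                             # element-wise weighting (Equation 4)
```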

III-E Reconstruction

The reconstruction part of our CNN creates a prediction map of the ROIs within the PET-CT image. It does this by integrating the co-learned feature maps from different encoder blocks and upsampling them to the dimensions of the original inputs. Similar to the encoders, the reconstruction component comprises four blocks, each with one upsampling layer, one deconvolutional layer [57], and two convolutional layers.

The input to a reconstruction block is the output co-learned feature map from a co-learning unit stacked with the output of any prior reconstruction block. The upsampling layer first doubles the width and height of the stacked feature map using nearest neighbor interpolation to enable eventual reconstruction of the detected regions at the same scale as the original input. The deconvolution layer merges the information from the stacked modality-specific feature maps, which is further refined by the two convolutional layers. The concept behind each reconstruction block is to generate higher dimensional feature maps that better correspond to the features for different ROIs by merging lower dimensional information with features that were fused from multiple image modalities. As with the modality-specific encoders (see Section III-C), we use batch normalization [51] and Leaky ReLU [52] activations.
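The sketch below illustrates one reconstruction block with the tf.keras API; the layer ordering follows the description above, but the code is an approximation (e.g., the per-block channel counts are not reproduced), not the authors' implementation.

```python
import tensorflow as tf

def reconstruction_block(fused, previous=None, channels=64, alpha=0.1):
    """One reconstruction block: stack the co-learned feature map with any
    prior reconstruction output, upsample x2, then deconvolve and refine."""
    x = fused if previous is None else tf.keras.layers.Concatenate()([fused, previous])
    x = tf.keras.layers.UpSampling2D((2, 2), interpolation='nearest')(x)
    x = tf.keras.layers.Conv2DTranspose(channels, (3, 3), padding='same')(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.LeakyReLU(alpha)(x)
    for _ in range(2):
        x = tf.keras.layers.Conv2D(channels, (3, 3), padding='same')(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.LeakyReLU(alpha)(x)
    return x
```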

After the last reconstruction block, the output feature map has the same width and height as the input PET-CT image, with 64 channels in the third dimension. This is analogous to a final 64-dimensional feature vector for each pixel in the original image. We then use a 1 × 1 convolution to map these feature vectors into the number of ROIs (4 in our experimental set-up), obtaining for each pixel a vector $o$ corresponding to the observed activations for each ROI class. Finally, we transform these observations into a probability or prediction map that corresponds to the likelihood of the pixel belonging to a particular ROI class using the softmax function [58]:

$$P(r \mid o) = \frac{\exp(o_r)}{\sum_{q=1}^{R} \exp(o_q)} \qquad (5)$$

where $P(r \mid o)$ is the probability that the pixel with observation vector $o$ belongs to the ROI $r$, $o_r$ is the $r$-th element of vector $o$ and is the activation corresponding to ROI $r$, and $R$ is the total number of ROIs. Fig. 4 is an example of the prediction maps generated for four ROI classes: lung fields, mediastinum or soft tissue, tumors, and background (all other ROIs).

Fig. 4: The prediction maps generated by our CNN for the different regions; a higher (whiter) intensity implies a higher probability that a pixel belongs to a specific region. This example shows three regions and the background: (a) lung fields, (b) mediastinum, (c) tumors, and (d) background.
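Equation (5), applied per pixel over the channel dimension, can be written as follows (a numerically stabilized NumPy version; the array shapes are illustrative).

```python
import numpy as np

def prediction_map(activations):
    """Per-pixel softmax over ROI classes (Equation 5).

    activations : array of shape (H, W, R), the 1x1-convolution outputs,
                  with R = 4 classes (lungs, mediastinum, tumor, background).
    Returns an (H, W, R) probability map that sums to 1 over the last axis.
    """
    shifted = activations - activations.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)
```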

III-F Network Training

We trained our CNN using mini-batch stochastic gradient descent with momentum [59], using the following loss function and training parameters. We used the training data as specified in Section III-A; to improve the robustness of our training and to avoid overfitting we applied data augmentation through the standard technique of random cropping and flipping of training samples [48, 36].
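A minimal sketch of this augmentation is shown below; the crop size and random generator are illustrative choices, and the same transform is applied to the CT slice, PET slice, and label map so that they remain aligned.

```python
import numpy as np

def augment(ct, pet, labels, crop=180, rng=None):
    """Randomly crop and horizontally flip a PET-CT training sample and its labels.

    The identical transform is applied to the CT slice, PET slice, and label map
    so that pixels stay aligned across modalities; the crop size is illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = ct.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    window = (slice(top, top + crop), slice(left, left + crop))
    ct, pet, labels = ct[window], pet[window], labels[window]
    if rng.random() < 0.5:                        # random horizontal flip
        ct, pet, labels = ct[:, ::-1], pet[:, ::-1], labels[:, ::-1]
    return ct, pet, labels
```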

Architecture Parameter              Value
2D convolution kernel size          3 × 3
2D deconvolution kernel size        3 × 3
convolution/deconvolution stride    1
max pool size                       2 × 2
pool stride                         2
number of channels (C)              64
3D convolution kernel size          3 × 3 × 2

Training Parameter                  Value
ReLU leakiness (α)                  0.1
regularization strength (λ)         0.1
learning rate                       0.0001
momentum                            0.9
batch size                          5
# epochs                            500

TABLE I: CNN Architecture and Training Parameters
Metrics [Mean ± Standard Deviation %]

ROI          CNN          Precision         Sensitivity       Specificity       Accuracy

lungs        TB           77.46 ± 6.10*     99.58 ± 0.51      98.29 ± 0.49*     98.36 ± 0.46*
             TC           74.58 ± 6.87*     99.68 ± 0.79      97.96 ± 0.66*     98.07 ± 0.60*
             FS           80.19 ± 6.40*     98.60 ± 1.47*     98.56 ± 0.50*     98.57 ± 0.47*
             our method   82.57 ± 5.62      99.51 ± 0.82      98.76 ± 0.40      98.81 ± 0.37

mediastinum  TB           62.86 ± 13.46     92.00 ± 8.33*     99.06 ± 0.60      98.97 ± 0.59
             TC           62.17 ± 14.80     93.55 ± 5.47*     99.09 ± 0.43      98.99 ± 0.46
             FS           56.17 ± 14.27*    89.00 ± 17.30*    98.82 ± 0.58*     98.69 ± 0.66*
             our method   64.75 ± 12.87     95.74 ± 7.90      99.08 ± 0.65      99.04 ± 0.64

tumors       TB           57.70 ± 28.72*    61.34 ± 33.49*    99.86 ± 0.12*     99.71 ± 0.29*
             TC           57.14 ± 27.83*    63.16 ± 36.14*    99.82 ± 0.20*     99.72 ± 0.23*
             FS           45.30 ± 24.43*    82.86 ± 22.66*    99.73 ± 0.20*     99.67 ± 0.21*
             our method   71.17 ± 26.61     75.99 ± 27.53     99.91 ± 0.12      99.83 ± 0.15

background   TB           99.86 ± 0.19*     97.27 ± 0.79*     98.50 ± 1.80*     97.37 ± 0.78*
             TC           99.89 ± 0.18      96.88 ± 0.96*     98.85 ± 1.69      97.04 ± 0.92*
             FS           99.81 ± 0.29*     97.12 ± 22.66*    98.02 ± 0.20*     97.20 ± 0.95*
             our method   99.91 ± 0.08      97.74 ± 0.79      99.04 ± 0.89      97.85 ± 0.74

overall      TB           73.77 ± 5.54*     96.46 ± 3.53*     99.09 ± 0.26*     99.02 ± 0.32*
             TC           71.62 ± 6.20*     97.22 ± 2.51*     98.97 ± 0.30*     98.93 ± 0.33*
             FS           73.10 ± 5.74*     96.25 ± 3.24*     99.05 ± 0.29*     98.98 ± 0.33*
             our method   78.16 ± 5.17      98.00 ± 1.80      99.26 ± 0.26      99.23 ± 0.29

  • * indicates p < 0.05, derived from a two-sample t-test.

TABLE II: Comparison of CNNs

III-F1 Loss Function

We modified the well-established categorical cross-entropy loss function for training our CNN. Let $O$ be the set of pixel observations in an image and $r_o$ be the true class of $o \in O$, from a set of $R$ ROIs. Then our loss is given by:

$$L = \sum_{o \in O} \beta_{r_o} H(o) + \lambda \sum_{k} w_k^2 \qquad (6)$$

where

$$\beta_{r_o} = 1 - \frac{N_{r_o}}{\sum_{r=1}^{R} N_r} \qquad (7)$$

is the class specific scaling, and

$$H(o) = -\sum_{r=1}^{R} \mathbb{1}[r = r_o] \log P(r \mid o) \qquad (8)$$

is the cross-entropy loss [37]. Under this formulation, $N_{r_o}$ is the number of pixels in the true ROI, $N_r$ is the number of pixels in ROI $r$, $\mathbb{1}[r = r_o]$ is an indicator function that is 1 when $r = r_o$ and 0 otherwise, $P(r \mid o)$ is defined by Equation 5, $\lambda$ is the regularization strength, and $w_k$ is the $k$-th weight in $\mathbf{W}$, the set of all weights in the CNN. The distribution of the number of pixels in each class varies depending on the particular ROI (e.g., there are many more lung pixels than there are tumor pixels). As such, $\beta_{r_o}$ in Equation 6 acts as a scaling coefficient for the cross-entropy loss $H(o)$; this formulation is designed to reduce any bias that may be caused by ROIs with different sizes (e.g., tumor ROIs are often much smaller than lung fields) [44]. The final term in Equation 6 is a regularization to reduce overfitting. Our aim was to ensure that the convolution kernel weights (and as a consequence, the features) corresponding to one modality did not overpower the weights (and the features) of the other. As such, we used an L2 regularization, which acts to prioritize lower weights across the entirety of $\mathbf{W}$ [60].
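A NumPy sketch of this loss is shown below; the class-specific scaling follows our reconstruction of Equation (7), so the exact form in the original implementation may differ.

```python
import numpy as np

def class_balanced_loss(probs, true_labels, weights, lam=0.1, eps=1e-12):
    """Scaled categorical cross-entropy with L2 regularization (Eqs. 6-8).

    probs       : (N, R) softmax outputs for N pixels and R ROI classes (Eq. 5).
    true_labels : (N,) integer class index of each pixel.
    weights     : iterable of CNN weight arrays, for the L2 penalty.
    """
    n_pixels = len(true_labels)
    counts = np.bincount(true_labels, minlength=probs.shape[1])
    beta = 1.0 - counts / counts.sum()            # class-specific scaling (Eq. 7, reconstructed)
    ce = -np.log(probs[np.arange(n_pixels), true_labels] + eps)  # per-pixel cross-entropy (Eq. 8)
    data_term = np.sum(beta[true_labels] * ce)
    l2_term = lam * sum(np.sum(w ** 2) for w in weights)
    return data_term + l2_term
```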

III-F2 Parameter Selection

We empirically derived the parameters using a two-fold cross-validation approach on the training data (see Section III-A). Table I lists the parameters used for our training. Further information on our parameter validation is provided in Section SI of the Supplementary Materials.

III-G Experimental Design

We implemented our CNN using TensorFlow 1.4 [61] on a machine running Ubuntu 14.04 with CUDA 8.0 and cuDNN [62]. Training was performed on an 11 GB NVIDIA GTX 1080 Ti.

We compared our fusion method to several baseline strategies for fusing information from multi-modality images. To limit the number of variable changes in our experimentation, for all baselines we used a similar architecture as in our method (Fig. 1), replacing the co-learning component with a fusion strategy from the literature. The baselines were:

  • A two-branch (TB) CNN, implementing a fusion strategy where each modality was processed separately and the outputs from each modality were combined [9, 41, 13]. The CNN was similar to the architecture in Fig. 1 with no co-learning component.

  • A two-channel (TC) input CNN, implementing a fusion strategy where each modality was treated as different channels of a single input [39, 19]. The CNN was similar to a single encoder form of the architecture in Fig. 1, with no co-learning component and the CT and PET modalities input as separate channels.

  • A fused (FS) input CNN, implementing a strategy where the input was a PET-CT image that had already been fused via pixel-intermixing [11]. The CNN was similar to a single encoder form of the architecture in Fig. 1 with no co-learning component.

We used the same training and test datasets for all experiments. We used greyscale inputs for both modalities, as was common in the baseline fusion strategies [39, 19, 9] and other multi-modality CNN research [10, 42, 39, 40]. Our comparisons used the following metrics based on per-pixel overlap with the ground truth (GT): precision, sensitivity (recall), specificity, and accuracy. We computed the p-value for these comparisons with the two-sample t-test.
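For reference, the per-pixel overlap metrics can be computed per ROI class as in the following sketch (our illustration, not the authors' evaluation code).

```python
import numpy as np

def roi_metrics(pred_mask, gt_mask):
    """Precision, sensitivity (recall), specificity, and accuracy for one ROI,
    computed from per-pixel overlap between predicted and ground-truth masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    return {
        'precision': tp / (tp + fp),
        'sensitivity': tp / (tp + fn),
        'specificity': tn / (tn + fp),
        'accuracy': (tp + tn) / (tp + fp + fn + tn),
    }
```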

Fig. 5: Visual comparison of the results obtained by our method compared to the baselines and the ground truth (GT). For clarity, we show on the left the fused form of the original PET-CT image, with a color lookup table applied to the PET modality.
Fig. 6: Fusion maps obtained by the first co-learning unit in our CNN. For better visualization, each map has been independently normalized. Areas with higher intensity values represent fusion weights that are relatively more important than areas with lower intensity values.

IV Results

Table II shows the precision, sensitivity, specificity, and accuracy of our method when compared with the baseline CNNs; the data are presented for each of the four types of ROI and collectively for all ROIs. Our co-learning method has higher mean accuracy when compared to all baselines for all individual ROIs and overall. The improvement in accuracy offered by our method is statistically significant (p < 0.05) for all cases except for the mediastinum. Our co-learning fusion method improves upon the baselines in 17 of the 20 metrics and 13 of these improvements are statistically significant. The largest overall improvement was in the precision metric, indicating that our method resulted in an increase in the ratio of true positives to false positives.

Fig. 5 is a visual comparison of the ROIs detected by our method and by the baselines; a larger version is included as Fig. S3 in the Supplementary Materials. The figure shows that our method consistently detected regions that were a similar size to the ground truth. In contrast, the TC baseline detected fewer pixels (as shown by the tumor region) while the TB and FS baselines detected more pixels than were within the region. In particular, the TB CNN gave pixels within the chest wall a high probability of being within the mediastinum.

Fig. 6 depicts the co-learned fusion maps that were derived for an image with a single tumor; a larger version is included as Fig. S4 in the Supplementary Materials. In the figure, each feature map channel has been independently normalized so that their real-valued pixels could be viewed in the paper. In any particular channel, a higher absolute intensity implies a greater importance placed on that pixel during fusion. The figure shows how different information is prioritized differently for each region. For example, the 7th CT fusion channel (row 1, column 7) places a greater emphasis on the lungs while the 26th PET fusion channel (row 8, column 2) places the greatest emphasis on the tumor. The figure also indicates that the fusion weights are derived from features of both modalities. For example, the 7th CT fusion channel (row 1, column 7) emphasizes the lungs including the area that contains the tumor. Meanwhile, the 13th CT fusion channel (row 2, column 5) also emphasizes the lungs but de-emphasizes the area containing the tumor. Further analyses are included in Section SIII of the Supplementary Materials.

V Discussion

Our findings show that our co-learning method for feature fusion results in improved overall accuracy (see Table II) and a more consistent detection of regions (see Fig. 5) when compared with the baseline CNNs. We attribute these findings to the ability of the co-learning unit to derive a spatially varying fusion map that more precisely integrates functional and anatomical visual features across different regions.

V-A Comparison with Baseline CNNs

Our co-learning CNN achieved a higher precision, sensitivity, specificity, and accuracy than the TB CNN for fusion across all regions and also overall. Our explanation for this outcome is that the design of our CNN explicitly fuses features at multiple scales through the multiple co-learning units, which prevents information loss that can occur from the standard pooling (downsampling) operations used for feature map dimensionality reduction in CNNs. In contrast, the TB CNN implements a late fusion approach in which modality-specific feature maps are merged just prior to the reconstruction, meaning that useful complementary information could possibly have already been lost. An examination of Fig. 5 shows that the TB CNN tends to have larger predicted regions compared to the GT (e.g. larger tumor area, additional regions in mediastinum), indicating that the lost complementary information makes the TB output less precise.

In a similar fashion, the TC CNN implements an early fusion approach in which no modality-specific feature maps are derived and where the first convolutional layer combines both modalities to derive fused feature maps. However, as indicated by the metrics in Table II and the images in Fig. 5, this tends to prioritize information from one modality at the expense of the other. The clearest example is in the less precise detection of the tumor region, which is barely noticeable in Fig. 5; only the part of the tumor with peak SUV (highest radiotracer uptake) is detected and the more subtle tumor regions are missed altogether.

The FS baseline is another variant of early fusion; the PET and CT modalities are pre-fused via pixel intermixing and the intermixed image is used as the input. It shares a similar weakness to the TC CNN in that the pre-fusion acts to prioritize information from one modality at the expense of the other, resulting in high precision for the lungs (80.19% in Table II) but much lower precision for the other regions. Examination of Fig. 5 shows that the tumor and mediastinum regions detected by the FS CNN are larger than the GT, indicating that there are a greater number of false positives.

All approaches examined (baselines and our method) had consistently high overall specificity (Table II). This is expected due to the large background region in PET-CT images, caused by areas that are outside the field of view of the scanner. All the methods achieved a high precision (above 99.8%, Table II) in detecting the background, correctly recognizing that background regions are distinct from other regions (i.e., they are true negatives). While all methods were able to discriminate between the background and other regions, our method had the best ability to discriminate between all the different regions.

V-B Importance of the Fusion Map

The manner of feature fusion is a key difference between our CNN and the baseline CNNs. Our CNN derives a fusion map for each image that is explicitly multiplied across the feature maps of the different modalities (see Equation 4), thereby acting as feature weights. As such, our method can potentially derive different fusion maps for different input PET-CT images, prioritizing different characteristics at different locations. In contrast, all the baseline CNNs use the convolution operation to fuse the different modalities; each channel is convolved with its own learned kernel and the results of each channel's convolutions are added together. Our CNN also involves such convolutions but they occur after the prioritization of information by multiplication with the fusion map.

The fusion maps shown in Fig. 6 indicate that our method prioritizes different information at different locations. For example, the 7th CT fusion channel (row 1, column 7) and the 13th CT fusion channel (row 2, column 5) emphasize the lung fields relative to the area containing the tumor. We suggest the co-learning unit has produced these specific fusion channels because (in combination with other channels) they contain information to distinguish the lung fields from any tumors they may contain. Similar patterns are noticed in the fusion maps of other PET-CT images. While it may appear that several channels in the fusion map are redundant (similar in appearance to other channels), this is merely a visualization issue caused by normalizing 32-bit floating point greyscale images for display within the paper. As shown in Fig. S4 in the Supplementary Materials, PET fusion channels 33 to 37 (row 13, columns 1 to 5) appear visually similar but closer examination of the distribution of fusion weights within the images indicates that each channel prioritizes information in subtly different ways. Section SIII in the Supplementary Materials contains a detailed example showing the differences in these visually similar fusion channels and their impact when considering inputs with heterogeneous tumors. We suggest that the capacity of our co-learning CNN to derive these subtly different fusion weights enables more precise integration of the complementary information in each modality.

V-C Directions for Future Work

In our experiments, we compared our co-learning concept for fusion to other fusion approaches. To focus mainly on the differences in the approach to fusion, we built variant baseline CNNs that were similar to our own CNN but that implemented fusion in a different manner. This was done so that the main difference between the baselines and our CNN was the presence of our co-learning component, limiting the number of architectural differences. It also meant that we could use similar hyperparameters for fairer experimental comparisons. Our findings indicate that the addition of the co-learning component improved the final results and as such we suggest that other CNNs may also see improvements if they were to follow a similar conceptual approach for feature fusion; we have left this for future research.

Similarly, we suggest our co-learning CNN could also be extended or adapted to be better optimized for different datasets and applications. Such extensions could include improved encoders that go beyond stacked CNNs by borrowing designs from residual [63], Inception [64], or other newer CNN architectures; enhanced application-specific encoders would better optimize the feature extraction for different applications. The co-learning unit could likewise be adapted, such as by using multiple stacked convolutions to derive fusion maps with even finer refinements. Finally, it is expected that the final blocks of the reconstruction component will be redesigned for different applications, e.g., by using fully connected layers or global average pooling [65] for classification applications. We will examine some of these extensions and adaptations in our future research.

In addition, we used greyscale inputs for all experiments rather than use color lookup tables (CLUTs) for PET. CLUTs are sometimes used to enhance the appearance of functional information, particularly in image visualization. Our experimental aim was to focus on how the information from each modality was prioritized and the colorization of PET may have biased the functional information. However, we acknowledge that color information may provide additional visual features and we will explore this in a future study.

The evaluation of our co-learning CNN examined its performance on PET-CT lung cancer images. Other datasets of different body regions or diseases may prioritize a different set of anatomical and functional characteristics. Our findings show that our CNN was able to detect regions that were mainly dependent on single-modality information (e.g. lungs relying mainly on anatomical information from CT) as well as those that were dependent on multi-modality information (e.g., tumors adjacent to the different anatomical structures). This suggests that our method can be trained to adapt to the underlying multi-modality information important for different regions in different datasets. We will examine the behavior of our CNN across different datasets in future work.

VI Conclusion

We presented a new supervised CNN for fusing complementary information from multi-modality images. Our CNN leveraged modality-specific features to derive a spatially varying fusion map that quantified the importance of each modality’s features across different spatial locations. Our findings from region detection experiments on PET-CT lung cancer images demonstrated that our approach achieved a significantly higher accuracy (p < 0.05) than several baseline CNN-based methods for multi-modality image analysis. We suggest that our conceptual approach of having a specific CNN architectural component to derive explicit fusion maps could be a useful technique for medical image analysis applications that require considering complementary information from different image modalities, e.g. PET-CT and PET-MR.

References

  • [1] S. Kligerman and S. Digumarthy, “Staging of non–small cell lung cancer using integrated PET/CT,” Am J Roentgenol, vol. 193, no. 5, pp. 1203–1211, 2009.
  • [2] T. M. Blodgett, C. C. Meltzer, and D. W. Townsend, “PET/CT: Form and Function,” Radiology, vol. 242, no. 2, pp. 360–385, 2007.
  • [3] R. L. Wahl, H. Jacene, Y. Kasamon, and M. A. Lodge, “From RECIST to PERCIST: Evolving Considerations for PET Response Criteria in Solid Tumors,” J Nucl Med, vol. 50, no. Suppl 1, pp. 122S–150S, 2009.
  • [4] U. Bagci, J. K. Udupa, N. Mendhiratta, B. Foster, Z. Xu, J. Yao, X. Chen, and D. J. Mollura, “Joint segmentation of anatomical and functional images: Applications in quantification of lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT images,” Med Image Anal, vol. 17, no. 8, pp. 929–945, 2013.
  • [5] F. C. Detterbeck, D. J. Boffa, and L. T. Tanoue, “The new lung cancer staging system,” Chest, vol. 136, no. 1, pp. 260–271, 2009.
  • [6] S. B. Edge, D. R. Byrd, C. C. Compton, A. G. Frtiz, F. L. Greene, and A. Trotti, Eds., AJCC Cancer Staging Manual.   Springer New York, 2010.
  • [7] S. B. Edge and C. C. Compton, “The American Joint Committee on Cancer: the 7th Edition of the AJCC Cancer Staging Manual and the Future of TNM,” Ann Surg Oncol, vol. 17, pp. 1471–1474, 2010.
  • [8] E. Tatci, O. Ozmen, Y. Dadali, I. U. Biner, A. Gokcek, F. Demirag, F. Incekara, and N. Arslan, “The role of FDG PET/CT in evaluation of mediastinal masses and neurogenic tumors of chest wall,” Int J Clin Exp Med, vol. 8, no. 7, pp. 11146–52, 2015.
  • [9] A. Teramoto, H. Fujita, O. Yamamuro, and T. Tamaki, “Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique,” Med Phys, vol. 43, no. 6, pp. 2821–2827, 2016.
  • [10] L. Bi, J. Kim, A. Kumar, L. Wen, D. Feng, and M. Fulham, “Automatic detection and classification of regions of FDG uptake in whole-body PET-CT lymphoma studies,” Comput Med Imag Grap, vol. 60, pp. 3–10, 2017.
  • [11] L. Xu, G. Tetteh, J. Lipkova, Y. Zhao, H. Li, P. Christ, M. Piraud, A. Buck, K. Shi, and B. H. Menze, “Automated whole-body bone lesion detection for multiple myeloma on 68Ga-Pentixafor PET/CT imaging using deep learning methods,” Contrast Media Mol I, vol. 2018, p. 11, 2018.
  • [12] F. Milletari, N. Navab, and S. A. Ahmadi, “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in Fourth International Conference on 3D Vision (3DV), 2016, pp. 565–571.
  • [13] Z. Zhong, Y. Kim, L. Zhou, K. Plichta, B. Allen, J. Buatti, and X. Wu, “3D fully convolutional networks for co-segmentation of tumors on PET-CT images,” in IEEE ISBI, 2018, pp. 228–231.
  • [14] J. Zhao, G. Ji, Y. Qiang, X. Han, B. Pei, and Z. Shi, “A new method of detecting pulmonary nodules with PET/CT based on an improved watershed algorithm,” PLOS ONE, vol. 10, no. 4, pp. 1–15, 2015.
  • [15] C. Lartizien, M. Rogez, E. Niaf, and F. Ricard, “Computer-aided staging of lymphoma patients with FDG PET/CT imaging based on textural information,” IEEE J Biomed Health, vol. 18, no. 3, pp. 946–955, 2014.
  • [16] Y. Song, W. Cai, H. Huang, X. Wang, Y. Zhou, M. J. Fulham, and D. D. Feng, “Lesion detection and characterization with context driven approximation in thoracic FDG PET-CT images of NSCLC studies,” IEEE T Med Imaging, vol. 33, no. 2, pp. 408–421, 2014.
  • [17] Q. Song, J. Bai, D. Han, S. Bhatia, W. Sun, W. Rockey, J. E. Bayouth, J. M. Buatti, and X. Wu, “Optimal co-segmentation of tumor in PET-CT images with context information,” IEEE T Med Imaging, vol. 32, no. 9, pp. 1685–1697, 2013.
  • [18] D. Han, J. Bayouth, Q. Song, A. Taurani, M. Sonka, J. Buatti, and X. Wu, “Globally optimal tumor segmentation in PET-CT images: A graph-based co-segmentation method,” in Information Processing in Medical Imaging.   Springer Berlin Heidelberg, 2011, pp. 245–256.
  • [19] T. Bradshaw, T. Perk, S. Chen, H.-J. Im, S. Cho, S. Perlman, and R. Jeraj, “Deep learning for classification of benign and malignant bone lesions in [F-18]NaF PET/CT images,” J Nucl Med, vol. 59, no. S1, p. 327, 2018.
  • [20] Y. Song, W. Cai, J. Kim, and D. D. Feng, “A multistage discriminative model for tumor and lymph node detection in thoracic images,” IEEE T Med Imaging, vol. 31, no. 5, pp. 1061–1075, 2012.
  • [21] L. Bi, J. Kim, D. Feng, and M. Fulham, “Multi-stage thresholded region classification for whole-body PET-CT lymphoma studies,” in MICCAI, 2014, pp. 569–576.
  • [22] A. Kumar, J. Kim, L. Wen, M. Fulham, and D. Feng, “A graph-based approach for the retrieval of multi-modality medical images,” Med Image Anal, vol. 18, no. 2, pp. 330–342, 2014.
  • [23] R. Boellaard, R. Delgado-Bolton, W. J. G. Oyen, F. Giammarile, K. Tatsch, W. Eschner, F. J. Verzijlbergen, S. F. Barrington, L. C. Pike, W. A. Weber, S. Stroobants, D. Delbeke, K. J. Donohoe, S. Holbrook, M. M. Graham, G. Testanera, O. S. Hoekstra, J. Zijlstra, E. Visser, C. J. Hoekstra, J. Pruim, A. Willemsen, B. Arends, J. Kotzerke, A. Bockisch, T. Beyer, A. Chiti, and B. J. Krause, “FDG PET/CT: EANM procedure guidelines for tumour imaging: version 2.0,” Eur J Nucl Med Mol I, vol. 42, no. 2, pp. 328–354, 2015.
  • [24] A. Kroiss, D. Putzer, C. Decristoforo, C. Uprimny, B. Warwitz, B. Nilica, M. Gabriel, D. Kendler, D. Waitz, G. Widmann, and I. J. Virgolini, “68Ga-DOTA-TOC uptake in neuroendocrine tumour and healthy tissue: differentiation of physiological uptake and pathological processes in PET/CT,” Eur J Nucl Med Mol I, vol. 40, no. 4, pp. 514–523, 2013.
  • [25] S. J. Rosenbaum, T. Lind, G. Antoch, and A. Bockisch, “False-positive FDG PET uptake-the role of PET/CT,” Eur Radiol, vol. 16, no. 5, pp. 1054–1065, 2006.
  • [26] T. M. Blodgett, M. B. Fukui, C. H. Snyderman, B. F. Branstetter, B. M. McCook, D. W. Townsend, and C. C. Meltzer, “Combined PET-CT in the head and neck,” RadioGraphics, vol. 25, no. 4, pp. 897–912, 2005.
  • [27] S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, “Pixel-level image fusion: A survey of the state of the art,” Inform Fusion, vol. 33, pp. 100–112, 2017.
  • [28] M. Kumar and S. Dass, “A total variation-based algorithm for pixel-level image fusion,” IEEE T Image Process, vol. 18, no. 9, pp. 2137–2143, 2009.
  • [29] R. Shen, I. Cheng, and A. Basu, “QoE-based multi-exposure fusion in hierarchical multivariate gaussian CRF,” IEEE T Image Process, vol. 22, no. 6, pp. 2469–2478, 2013.
  • [30] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE T Pattern Anal, vol. 35, no. 8, pp. 1798–1828, 2013.
  • [31] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 05 2015.
  • [32] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, 2012, pp. 1097–1105.
  • [33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” Int J Comput Vision, vol. 115, no. 3, pp. 211–252, 2015.
  • [34] H. C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers, “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” IEEE T Med Imaging, vol. 35, no. 5, pp. 1285–1298, 2016.
  • [35] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, and J. Liang, “Convolutional neural networks for medical image analysis: Full training or fine tuning?” IEEE T Med Imaging, vol. 35, no. 5, pp. 1299–1312, 2016.
  • [36] A. Kumar, J. Kim, D. Lyndon, M. Fulham, and D. Feng, “An ensemble of fine-tuned convolutional neural networks for medical image classification,” IEEE J Biomed Health, vol. 21, no. 1, pp. 31–40, 2017.
  • [37] M. Anthimopoulos, S. Christodoulidis, L. Ebner, A. Christe, and S. Mougiakakou, “Lung pattern classification for interstitial lung diseases using a deep convolutional neural network,” IEEE T Med Imaging, vol. 35, no. 5, pp. 1207–1216, 2016.
  • [38] H. Chen, D. Ni, J. Qin, S. Li, X. Yang, T. Wang, and P. A. Heng, “Standard plane localization in fetal ultrasound via domain transferred deep neural networks,” IEEE J Biomed Health, vol. 19, no. 5, pp. 1627–1636, 2015.
  • [39] W. Zhang, R. Li, H. Deng, L. Wang, W. Lin, S. Ji, and D. Shen, “Deep convolutional neural networks for multi-modality isointense infant brain image segmentation,” NeuroImage, vol. 108, pp. 214–224, 2015.
  • [40] K.-L. Tseng, Y.-L. Lin, W. Hsu, and C.-Y. Huang, “Joint sequence learning and cross-modality convolution for 3D biomedical segmentation,” in IEEE CVPR, 2017, pp. 3739–3746.
  • [41] G. van Tulder and M. de Bruijne, “Representation learning for cross-modality classification,” in International MICCAI Workshop on Medical Computer Vision, Cham, 2017, pp. 126–136.
  • [42] F. Liu, H. Jang, R. Kijowski, T. Bradshaw, and A. B. McMillan, “Deep learning MR imaging–based attenuation correction for PET/MR imaging,” Radiology, vol. 286, no. 2, pp. 676–684, 2018.
  • [43] A. Kumar, J. Kim, M. Fulham, and D. Feng, “Efficient PET-CT image retrieval using graphs embedded into a vector space,” in IEEE EMBC, 2014, pp. 1901–1904.
  • [44] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in MICCAI.   Springer, 2015, pp. 234–241.
  • [45] J. A. Thie, “Understanding the standardized uptake value, its methods, and implications for usage,” J Nucl Med, vol. 45, no. 9, pp. 1431–4, 2004.
  • [46] S. Hu, E. Hoffman, and J. Reinhardt, “Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images,” IEEE T Med Imaging, vol. 20, no. 6, pp. 490 –498, 2001.
  • [47] J. Bradley, W. L. Thorstad, S. Mutic, T. R. Miller, F. Dehdashti, B. A. Siegel, W. Bosch, and R. J. Bertrand, “Impact of FDG-PET on radiation therapy volume delineation in non-small-cell lung cancer.” Int J Radiat Oncol Biol Phys, vol. 59, no. 1, pp. 78–86, 2004.
  • [48] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: Delving deep into convolutional nets,” arXiv preprint arXiv:1405.3531, 2014.
  • [49] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [50] H. Shimodaira, “Improving predictive inference under covariate shift by weighting the log-likelihood function,” J Stat Plan Infer, vol. 90, no. 2, pp. 227–244, 2000.
  • [51] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in ICML, vol. 37, 2015, pp. 448–456.
  • [52] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in ICML Workshop on Deep Learning for Audio, Speech and Language, vol. 30, no. 1, 2013, p. 3.
  • [53] S. Ji, W. Xu, M. Yang, and K. Yu, “3D convolutional neural networks for human action recognition,” IEEE T Pattern Anal, vol. 35, no. 1, pp. 221–231, 2013.
  • [54] W. Cai and G. Sakas, “Data intermixing and multi‐volume rendering,” Comput Graph Forum, vol. 18, no. 3, pp. 359–368, 1999.
  • [55] A. Quon, S. Napel, C. F. Beaulieu, and S. S. Gambhir, ““flying through” and “flying around” a PET/CT scan: Pilot study and development of 3D integrated 18F-FDG PET/CT for virtual bronchoscopy and colonoscopy,” J Nucl Med, vol. 47, no. 7, pp. 1081–1087, 2006.
  • [56] R. Cheirsilp, R. Bascom, T. W. Allen, and W. E. Higgins, “Thoracic cavity definition for 3D PET/CT analysis and visualization,” Comput Biol Med, vol. 62, pp. 222–238, 2015.
  • [57] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” arXiv preprint arXiv:1603.07285, 2016.
  • [58] J. C. Fernandez Caballero, F. J. Martinez, C. Hervas, and P. A. Gutierrez, “Sensitivity versus accuracy in multiclass problems using memetic pareto evolutionary neural networks,” IEEE T Neural Networ, vol. 21, no. 5, pp. 750–770, 2010.
  • [59] I. Sutskever, J. Martens, G. Dahl, and G. Hinton, “On the importance of initialization and momentum in deep learning,” in ICML, 2013, pp. 1139–1147.
  • [60] S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Advances in Neural Information Processing Systems 28, 2015, pp. 1135–1143.
  • [61] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: A system for large-scale machine learning,” in Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, 2016, pp. 265–283.
  • [62] S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, “cuDNN: Efficient primitives for deep learning,” arXiv preprint arXiv:1410.0759, 2014.
  • [63] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE CVPR, 2016, pp. 770–778.
  • [64] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Proc. AAAI Conf Artificial Intelligence, vol. 4, 2017, p. 12.
  • [65] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in IEEE ICCV, 2017, pp. 618–626.

SI Verification of Parameter Settings

We used a two-fold cross-validation approach to verify that the architecture and training parameters were appropriate for our dataset. The two folds each comprised the slices from 20 studies, taken from the 40 studies in the full training dataset (see Section III-A in the main body of the paper). Slices from the same study were restricted to the same fold, i.e., no slices from one study appeared in both folds. As the studies in the folds were randomly selected, each fold contained a different number of individual slices. For this reason, the second fold had a larger number of iterations than the first fold over the same number of epochs. Fig. S1 shows the Tensorboard logs of the training accuracy across both folds, recorded every 20 iterations. Similarly, Fig. S2 shows the Tensorboard logs of the validation accuracy across both folds, recorded every 20 iterations. The figures show very similar levels of training and validation accuracy across both folds (0.943 training, 0.951 validation for Fold 1; 0.951 training, 0.947 validation for Fold 2), despite the difference in fold size, indicating that the parameters of our architecture resulted in stable learning.

Fig. S1: Tensorboard output for the training accuracy in both folds.
Fig. S2: Tensorboard output for the validation accuracy in both folds.

SII Results: Larger Images

Figure S3 is a larger version of Figure 5 from the main text to show the results in greater detail.

Fig. S3: Larger rendering of the visual results in the main text.

Figure S4 is a larger version of Figure 6 from the main text, showing all 128 fusion maps from the first co-learning unit.

Fig. S4: Larger rendering of the fusion maps in the main text.

SIII Fusion Maps

Figure S5 is an analysis of the fusion maps generated by our co-learning units. In this example, we examine the distribution of pixels within the tumor region within the PET image and three channels of the generated fusion map.

Fig. S5: Examination of tumor in the PET image against the fusion map characteristics within the tumor region.

The tumor region has high intensity compared to the other parts of the image and as such it is difficult to visually ascertain the differences among the fusion maps. As such, we used the ground truth tumor region and calculated the intensity histogram for the pixels within the tumor in the PET image and in the fusion maps; these histograms are also shown in Figure S5.

The histogram of the PET image shows that the tumor is heterogeneous, with a maximum SUV of approximately 20. The mode of the tumor pixels is at the tail end of the distribution (approximate SUV of 10), below the mean SUV of 12. Overall, the distribution of the tumor is skewed towards the lower SUV values.

The fusion maps all have distributions that are different to the tumor’s original SUV distribution and are also distinct from each other. The tumor region within Fusion Map A has a relatively homogeneous distribution where the weights are clustered: the minimum and maximum fusion weights (coefficients) are within two standard deviations of the mean. A potential interpretation is that this fusion map differentiates the tumor region from the surrounding non-tumor areas, which have lower intensity as seen in the image. In contrast, Fusion Maps B and C have histograms showing heterogeneous distributions; neither of these match the distribution pattern of the original tumor’s SUV distribution, implying that they are each prioritizing different aspects of the tumor region.

SIV Source Code

The source code and documentation for our CNN can be found at https://github.com/ashnilkumar/colearn.