Medical imaging plays an important role in a variety of clinical applications. In particular, magnetic resonance imaging (MRI) is a versatile and noninvasive imaging technique extensively used in disease diagnosis, segmentation and other tasks [48, 46, 50, 21, 8]. MRI comes in several modalities, such as T1-weighted (T1), T1-with-contrast-enhanced (T1c), T2-weighted (T2), and T2-fluid-attenuated inversion recovery (FLAIR). Because each modality captures specific characteristics of the underlying anatomical information, combining multiple complementary modalities can provide a highly comprehensive set of data. Thus, several studies have focused on integrating the strengths of multiple modalities by exploring their rich information and discovering the underlying correlations among them, as a means of improving various medical tasks [20, 55, 32, 54, 35].
In clinical practice, however, acquiring multiple MR imaging modalities is often challenging for a variety of reasons, such as scan cost, limited availability of scanning time, and safety considerations. This inevitably results in incomplete datasets and adversely affects the quality of diagnosis and treatment in clinical analysis. Since most existing methods are not designed to cope with missing modalities, they often fail under these conditions. One solution is to simply discard samples that have one or more missing modalities and perform tasks using the remaining samples with complete multi-modal data. However, this simple strategy discards a great deal of useful information contained in the removed samples and also exacerbates the small-sample-size issue. To overcome this, cross-modal medical image synthesis has gained widespread popularity, as it enables missing modalities to be produced artificially without requiring the actual scans. Currently, many learning-based synthesis methods have been proposed and have obtained promising performance [17, 36, 15, 7, 27, 38].
Currently, a large portion of medical image synthesis methods work on single-modality data [15, 14, 27, 39, 38, 6]. However, because multiple modalities are often used in medical applications, and based on the principle that "more modalities provide more information", several studies have begun investigating multi-modal data synthesis. For example, Chartsias et al. proposed a multi-input multi-output method for MRI synthesis. Olut et al. proposed a synthesis model for generating MR angiography sequences using the available T1 and T2 images. Yang et al.
proposed a bi-modal medical image synthesis method based on a sequential GAN model and semi-supervised learning. However, there still remains a general dearth of methods that use multi-modal data as input to synthesize medical images. To achieve multi-modal synthesis, one critical challenge is effectively fusing the various inputs. One fusion strategy is to learn a shared representation [41, 3, 30]. For example, Ngiam et al.
used a bi-modal deep autoencoder to fuse auditory and visual data, employing shared representation learning to then reconstruct the two types of inputs. Shared representation learning has proven particularly effective for exploiting the correlations among multi-modal data. However, while exploiting these correlations is important, preserving modality-specific properties is also essential for the multi-modal learning task, making it challenging to automatically balance the two aspects.
To this end, we propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis, which synthesizes the target (or missing) modality images by fusing the existing ones. Specifically, our model first learns a modality-specific network to capture information from each individual modality. This network is formed as an autoencoder to effectively learn the high-level feature representations. Then, a fusion network is proposed to exploit the correlations among multiple modalities. In addition, we also propose a layer-wise multi-modal fusion strategy that can effectively exploit the correlations among different feature layers. Furthermore, a mixed fusion block (MFB) is presented to adaptively weight different fusion strategies (i.e., element-wise summation, product, and maximization). Finally, our Hi-Net combines the modality-specific networks and the fusion network to learn a latent representation of the various modalities, which is then used to generate the target images. The effectiveness of the proposed synthesis method is validated by comparing it with various existing state-of-the-art methods (code is publicly available at: https://github.com/taozh2017/HiNet).
The main contributions of this paper are listed as follows.
Different from most existing single-modality synthesis approaches, we present a novel medical image synthesis framework that uses multiple modalities to synthesize target-modality images.
Our model captures the features of each individual modality through the modality-specific networks, and exploits the correlations among multiple modalities using a layer-wise multi-modal fusion strategy, effectively integrating multi-modal multi-level representations.
A novel MFB module is proposed to adaptively weight the different fusion strategies, effectively improving the fusion performance.
The rest of this paper is organized as follows. We introduce some related works in Section II. Then, we describe the framework of our proposed Hi-Net for medical image synthesis in Section III. We further present the experimental settings, experimental results, and discussion in Section IV. Finally, we conclude the paper in Section V.
II Related Works
We review some related works on cross-modal synthesis, medical image synthesis, and multi-modal learning below.
Cross-modal synthesis. The key idea of GAN-based synthesis methods is to conduct continuous adversarial learning between a generator and a discriminator, where the generator tries to produce images that are as realistic as possible, while the discriminator tries to distinguish the generated images from real ones. For example, Pix2pix focuses on pixel-to-pixel image synthesis based on paired data, reinforcing the pixel-to-pixel similarity between the real and the synthesized images. The conditional GAN was proposed to learn a similar translation mapping under a conditional framework to capture structural information. Besides, CycleGAN was proposed to generalize the conditional GAN and can be applied to unpaired data.
Medical image synthesis. Several machine learning-based synthesis methods have been developed. Traditional synthesis methods are often regarded as patch-based regression tasks [17, 36], which take a patch of an image or volume from one modality to predict the intensity of a corresponding patch in a target image. For example, a regression forest was utilized to regress target-modality patches from given-modality patches. In addition to the patch-based regression models, there has also been rapid development in sparse representation for medical image synthesis [15, 14]. Huang et al.
proposed a weakly-supervised joint convolutional sparse coding method to simultaneously solve the problems of super-resolution (SR) and cross-modality image synthesis. Another popular type of medical image synthesis method is the atlas-based model. These methods [31, 1]
adopted the paired image atlases from the source and target modalities to calculate the atlas-to-image transformation in the source modality, which is then applied to synthesize target-modality-like images from the corresponding target-modality atlases. More recently, deep learning has been widely applied in medical image analysis, achieving promising results. For the image synthesis task, Dong et al.
proposed an end-to-end mapping between low/high-resolution images using convolutional neural networks (CNNs). Li et al.
used a deep learning model to estimate the missing positron emission tomography (PET) data from the corresponding MRI data. In addition, GAN-based methods have also achieved promising results in synthesizing various types of medical images, such as CT images, retinal images [5, 4], MR images, ultrasound images [34, 42], and so on. For example, Nie et al. utilized MR images to synthesize computed tomography (CT) images with a context-aware GAN model. Wolterink et al. utilized a GAN to transform low-dose CT into routine-dose CT images. Wang et al. also demonstrated promising results when using a GAN to estimate high-dose PET images from low-dose ones. The list of GAN-based methods proposed for medical image synthesis is extensive [43, 11, 6, 46, 45].
Multi-modal learning. Many real-world applications involve multi-modal learning, since data can often be obtained from multiple modalities. Due to the effectiveness of exploring the complementarity among multiple modalities, multi-modal learning has attracted increased attention recently. One popular strategy is to find a new space of common components that can be more robust than any of the input features from different modalities. For example, canonical correlation analysis (CCA) projects the features of each modality into a lower-dimensional subspace. Multiple kernel learning (MKL) utilizes a set of predefined kernels on multi-view data and integrates these modalities using optimized weights. In addition, several works have applied deep networks to multi-modal learning. For example, Zhou et al.
presented a three-stage deep feature learning framework to detect disease status by fusing MRI, PET, and single-nucleotide polymorphism (SNP) data. Nie et al. proposed a 3D deep learning model to predict the overall survival time of brain glioma patients by fusing
MRI, functional MRI (fMRI) and diffusion tensor imaging (DTI) data. Wang et al. proposed a multi-modal deep learning framework for RGB-D object recognition, which simultaneously learns transformation matrices for the two modalities under a maximal cross-modality correlation criterion. Hou et al. proposed a high-order polynomial multilinear pooling block to achieve multi-modal feature fusion, in which a hierarchical polynomial fusion network can flexibly fuse the mixed features across both time and modality domains. Some of the above methods [51, 28] focus on fusing the features from multiple modalities in high-level layers, while our model proposes a hybrid-fusion network to integrate multi-modal multi-level representations.
In this section, we provide details of the proposed Hi-Net, which comprises three main components: the modality-specific network, the multi-modal fusion network, and the multi-modal synthesis network (consisting of a generator and a discriminator).
III-A Modality-specific Network
In multi-modal learning, complementary information and correlations from multiple modalities are expected to boost the learning performance. Thus, it is critical to exploit the underlying correlations among multiple modalities, while also capturing the modality-specific information to preserve their properties. To achieve this goal, we first construct a modality-specific network for each individual modality (e.g., T1 or T2), as shown in Fig. 1. Thus, the high-level feature representation for the $i$-th modality can be represented as $h_i = f(x_i; \theta_i)$, where $\theta_i$ denotes the network parameters. To learn a meaningful and effective high-level representation, we adopt an autoencoder-like structure to reconstruct the original image from the learned high-level representation. To do so, we use the following reconstruction loss function:

$$\mathcal{L}_{rec} = \sum_{i} \| x_i - \hat{x}_i \|_1,$$
where $\hat{x}_i$ denotes the reconstructed image of $x_i$, and $\phi_i$ denotes the corresponding decoder network parameters. Besides, we use the $\ell_1$-norm to measure the difference between the original and reconstructed images. It is worth noting that the reconstruction loss provides side-output supervision to guarantee that the modality-specific network learns a discriminative representation for each individual modality.
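As a concrete illustration, the per-modality reconstruction term described above can be sketched as follows (a minimal NumPy sketch; the function name and array shapes are our own choices):

```python
import numpy as np

def l1_reconstruction_loss(images, reconstructions):
    # Sum of mean absolute differences between each original image
    # and its autoencoder reconstruction, one term per modality.
    return sum(np.abs(x - x_hat).mean()
               for x, x_hat in zip(images, reconstructions))
```

A perfect reconstruction yields zero loss, and each modality contributes its own term to the sum, matching the side-output supervision described above.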
A detailed illustration of the modality-specific network can be found in Fig. 1. In each convolutional layer, we use a 3×3 filter with stride 1 and padding 1. Besides, we also introduce batch normalization after each convolutional layer. Specifically, each batch is normalized during the training procedure using its own mean and standard deviation, and global statistics are then accumulated from these values. After batch normalization, the activation functions LeakyReLU and ReLU are used in the encoder and decoder, respectively. The pooling and upsampling layers use 2×2 filters.
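The normalization and activation behavior described above can be sketched as follows (a NumPy sketch of the training-time forward passes; the LeakyReLU slope used in the encoder is an assumption, as the paper only specifies the slope for the discriminator):

```python
import numpy as np

def batch_norm_train(x, eps=1e-5):
    # Training-time batch normalization: normalize each feature using
    # the current batch's own mean and standard deviation (axis 0 = batch).
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    return (x - mu) / (sigma + eps)

def leaky_relu(x, slope=0.01):
    # LeakyReLU used in the encoder; the slope value here is an assumption.
    return np.where(x > 0, x, slope * x)

def relu(x):
    # ReLU used in the decoder.
    return np.maximum(x, 0.0)
```

In a full implementation, learnable scale/shift parameters and running statistics (for inference) would be maintained alongside the per-batch normalization shown here.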
III-B Multi-modal Fusion Network
Most existing multi-modal learning approaches employ one of two multi-modal feature fusion strategies, i.e., early fusion or late fusion. Early fusion directly stacks all raw data and then feeds them into a single deep network, while late fusion first extracts high-level features from each modality and then combines them using a concatenation layer. To effectively exploit the correlations between multi-level representations from different layers (e.g., shallow layers and high-level layers) and reduce the diversity between different modalities, we propose a layer-wise fusion network. Moreover, an MFB module is also proposed to adaptively weight different inputs from various modalities. As shown in Fig. 1, the feature representations from the first pooling layer of each modality-specific network are fed into an MFB module, then the output of this front MFB module is input into the next MFB module together with the feature representations of the second pooling layer of the modality-specific networks. Thus, we have three MFB modules in the fusion network, as shown in Fig. 1. It is worth noting that the layer-wise fusion is independent of the modality-specific networks, thus it does not disturb the modality-specific structure and only learns the underlying correlations among the modalities. Besides, the proposed layer-wise fusion is conducted in different layers, thus our model can exploit the correlations among multiple modalities using low-level as well as high-level features.
Fig. 2 provides an illustration of the MFB module, where an adaptive weight network is designed to fuse the feature representations from multiple modalities. In the multi-modal fusion task, popular strategies include element-wise summation, element-wise product and element-wise maximization. However, it is not clear which is best for different tasks. Thus, to benefit from the advantages of each strategy, we simultaneously employ all three fusion strategies and then concatenate their results. Then, a convolutional layer is added to adaptively weight the three fusions. As shown in Fig. 2, we obtain the output $f_i^{(l)} \in \mathbb{R}^{C \times W \times H}$ for the $l$-th pooling layer of the $i$-th modality, where $C$ is the number of feature channels, and $W$ and $H$ denote the width and height of the feature maps, respectively. Then, we apply the three fusion strategies to the inputs to obtain

$$M_{sum} = f_1^{(l)} \oplus f_2^{(l)}, \quad M_{prod} = f_1^{(l)} \otimes f_2^{(l)}, \quad M_{max} = \mathrm{Max}(f_1^{(l)}, f_2^{(l)}),$$

where "$\oplus$", "$\otimes$" and "Max" denote the element-wise summation, element-wise product and element-wise maximization operations, respectively. Then, we combine them as $M = [M_{sum}, M_{prod}, M_{max}]$. $M$ is then fed into the first convolutional layer (i.e., Conv1 in Fig. 2). The output of this layer is concatenated with the previous output of the $(l-1)$-th MFB module, and fed into the second convolutional layer (i.e., Conv2 in Fig. 2). Finally, we obtain the output of the $l$-th MFB module. Note that, when $l = 1$, there is no previous output, so we simply feed the output of the first convolutional layer into the Conv2 layer. It is also worth noting that this fusion method allows the MFB module to adaptively weight the different feature representations from multiple modalities, benefiting from all three fusion strategies.
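The three fusion operations and their concatenation can be sketched as follows (a NumPy sketch for two modality feature maps; the adaptive convolutional weighting that follows the concatenation in the MFB module is omitted):

```python
import numpy as np

def fuse_strategies(f1, f2):
    # Element-wise summation, product, and maximization of two feature
    # maps of shape (C, H, W), concatenated along the channel axis.
    # In the MFB module, this concatenation is then passed through a
    # convolutional layer that adaptively weights the three strategies.
    assert f1.shape == f2.shape
    s = f1 + f2             # element-wise summation
    p = f1 * f2             # element-wise product
    m = np.maximum(f1, f2)  # element-wise maximization
    return np.concatenate([s, p, m], axis=0)
```

The channel dimension triples after concatenation, which is why the subsequent convolutional layer is needed to project the fused features back to a working channel count while learning how much each strategy should contribute.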
For the fusion network, all filters share the same size, and each of the three MFB modules uses its own pair of filter numbers. Batch normalization is conducted after each convolutional layer, followed by the ReLU activation function.
III-C Multi-modal Synthesis Network
Once the multi-modal latent representation $z$ (i.e., the output of the last MFB module in the multi-modal fusion network) has been obtained, we can use it to synthesize a target-modality image via a GAN model. Similar to the pixel-to-pixel synthesis method, our generator $G$ tries to generate an image $G(z)$ from the input $z$, while the discriminator $D$ tries to distinguish the generated image from the real image $y$. Accordingly, the objective function of the generator can be formulated as:

$$\mathcal{L}_G = \mathbb{E}_{z}\big[\log(1 - D(G(z)))\big] + \lambda \, \| G(z) - y \|_1, \tag{3}$$
where $\lambda$ is a nonnegative trade-off parameter. The generator tries to generate a realistic image that misleads $D$ via the first term of Eq. (3), and an $\ell_1$-norm is used to measure the difference between the generated image and the corresponding real image in the second term. Because we integrate multi-modal learning and image synthesis into a unified framework, the generator can be reformulated as:

$$\mathcal{L}_G = \mathbb{E}\big[\log(1 - D(G(x_1, x_2)))\big] + \lambda \, \| G(x_1, x_2) - y \|_1,$$

where $x_1$ and $x_2$ denote the two input modalities.
Additionally, the objective function of the discriminator can be formulated as:

$$\mathcal{L}_D = -\,\mathbb{E}_{y}\big[\log D(y)\big] - \mathbb{E}\big[\log(1 - D(G(x_1, x_2)))\big].$$
Finally, an end-to-end multi-modal synthesis framework can be formulated with the following objective:

$$\mathcal{L} = \mathcal{L}_G + \mathcal{L}_D + \eta \, \mathcal{L}_{rec},$$

where $\eta$ is a trade-off parameter.
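Under the standard GAN formulation that the objectives above describe, the generator and discriminator losses can be sketched numerically as follows (a NumPy sketch; `d_fake` and `d_real` stand for discriminator output probabilities and `lam` for the trade-off parameter, and all names are our own):

```python
import numpy as np

def generator_loss(d_fake, fake_img, real_img, lam=1.0):
    # Adversarial term (minimized when D(G(z)) approaches 1, i.e. the
    # discriminator is fooled) plus the l1 term that keeps the
    # synthesized image close to the real target image.
    adv = np.log(1.0 - d_fake + 1e-8).mean()
    l1 = np.abs(fake_img - real_img).mean()
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    # The discriminator is rewarded for scoring real images as 1
    # and synthesized images as 0.
    return -(np.log(d_real + 1e-8).mean()
             + np.log(1.0 - d_fake + 1e-8).mean())
```

In practice these expectations are estimated over mini-batches, and the two networks are updated in alternation.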
The detailed architecture of the generator is shown in Fig. 1. Specifically, we first feed the latent representation into two convolutional layers, and then the output is further fed into three MFB modules. Note that we also use the MFB modules to fuse the latent representation and the feature representations from the encoding layers of each modality-specific network using skip connections. Then, we feed the output of the last MFB module into an upsampling layer and two convolutional layers. Batch normalization is also conducted after each convolutional layer, followed by a ReLU activation function.
Additionally, the discriminator takes either a real target-modality image or a synthesized one as input, and aims to determine whether or not it is real. The input of the discriminator is a 2D image of the same size as the generator's output. The architecture of the discriminator consists of convolutional layers with batch normalization (BN). For the first four convolutional layers, we use filters with stride 2. Besides, all LeakyReLU activations use a slope of 0.2.
IV Experiments and Results
In this section, we describe our experimental settings, including the dataset, comparison methods, evaluation metrics, and implementation details. We present comparison results, results for the ablation study, and some related discussion.
To validate the effectiveness of our model, we use the Multimodal Brain Tumor Segmentation Challenge 2018 (BraTS2018) dataset. This dataset consists of 285 patients with multiple MR scans acquired from 19 different institutions and includes glioblastoma (GBM) and lower grade glioma (LGG) cohorts. The patient scans contain four modalities of co-registered MR volumes: T1, T1c, T2, and FLAIR, where each modality volume is of size 240×240×155. In this study, we use the T1, T2 and FLAIR images to verify the effectiveness of our proposed synthesis method. Our architecture uses 2D axial-plane slices of the volumes. For each 2D slice, we crop out a fixed-size image from the center region. Besides, we randomly split the 285 subjects into 80% for training and 20% for testing. To increase the number of training samples, we split each cropped image into four overlapping patches, and the overlapped regions are averaged to form the final estimation. For each volume, we linearly scale the original intensity values to a fixed range.
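The patch-splitting and overlap-averaging described above can be sketched as follows (a NumPy sketch with illustrative sizes; the actual crop and patch dimensions follow the paper's setup):

```python
import numpy as np

def split_into_four(img, psize):
    # Extract the four overlapping corner patches of size psize x psize.
    h, w = img.shape
    coords = [(0, 0), (0, w - psize), (h - psize, 0), (h - psize, w - psize)]
    return [img[r:r + psize, c:c + psize] for r, c in coords], coords

def merge_four(patches, coords, shape, psize):
    # Reassemble the image, averaging wherever patches overlap.
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (r, c) in zip(patches, coords):
        acc[r:r + psize, c:c + psize] += p
        cnt[r:r + psize, c:c + psize] += 1.0
    return acc / cnt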
IV-B Comparison Methods and Evaluation Metrics
To verify the effectiveness of the proposed synthesis method, we compare it with three state-of-the-art cross-modality synthesis methods: Pix2pix, CycleGAN, and MM-Syns. These methods can be summarized as follows: 1) Pix2pix. This method synthesizes a whole image by focusing on maintaining pixel-wise intensity similarity; 2) CycleGAN. This method uses a cycle consistency loss to enable training without the need for paired data. In our comparison, we use the paired data to synthesize medical images from one modality to another; and 3) MM-Syns. This method first learns a common representation for multi-modal data and then synthesizes an MR image slice-by-slice under the constraint of pixel-wise intensity difference.
To quantitatively evaluate the synthesis performance, three popular metrics are adopted in this study: 1) Peak Signal-to-Noise Ratio (PSNR). Given a ground-truth image $Y$ and a generated image $\hat{Y}$, PSNR is defined as $\mathrm{PSNR} = 10 \log_{10} \frac{N \max^2(Y, \hat{Y})}{\| Y - \hat{Y} \|_2^2}$, where $N$ is the total number of voxels in each image, and $\max(Y, \hat{Y})$ is the maximal intensity value of the ground-truth image $Y$ and the generated image $\hat{Y}$; 2) Normalized Mean Squared Error (NMSE). This can be defined as $\mathrm{NMSE} = \| Y - \hat{Y} \|_2^2 \, / \, \| Y \|_2^2$; and 3) Structural Similarity Index Measurement (SSIM). This is defined as $\mathrm{SSIM} = \frac{(2 \mu_Y \mu_{\hat{Y}} + c_1)(2 \sigma_{Y\hat{Y}} + c_2)}{(\mu_Y^2 + \mu_{\hat{Y}}^2 + c_1)(\sigma_Y^2 + \sigma_{\hat{Y}}^2 + c_2)}$, where $\mu_Y$, $\mu_{\hat{Y}}$, $\sigma_Y^2$, and $\sigma_{\hat{Y}}^2$ are the means and variances of images $Y$ and $\hat{Y}$, and $\sigma_{Y\hat{Y}}$ is the covariance of $Y$ and $\hat{Y}$. The positive constants $c_1$ and $c_2$ are used to avoid a null denominator. Note that a higher PSNR value, lower NMSE value, and higher SSIM value indicate higher quality in the synthesized image.
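The three metrics can be computed as follows (a NumPy sketch computing each metric globally over whole images; the SSIM constants are the usual small positive defaults and are our choice):

```python
import numpy as np

def psnr(y, y_hat):
    # Peak Signal-to-Noise Ratio, using the maximal intensity over
    # both images as the peak value.
    mse = np.mean((y - y_hat) ** 2)
    peak = max(y.max(), y_hat.max())
    return 10.0 * np.log10(peak ** 2 / mse)

def nmse(y, y_hat):
    # Squared error normalized by the energy of the ground truth.
    return np.sum((y - y_hat) ** 2) / np.sum(y ** 2)

def ssim(y, y_hat, c1=1e-4, c2=9e-4):
    # Global (single-window) SSIM; c1 and c2 avoid a null denominator.
    mu1, mu2 = y.mean(), y_hat.mean()
    v1, v2 = y.var(), y_hat.var()
    cov = ((y - mu1) * (y_hat - mu2)).mean()
    return (((2 * mu1 * mu2 + c1) * (2 * cov + c2))
            / ((mu1 ** 2 + mu2 ** 2 + c1) * (v1 + v2 + c2)))
```

Note that SSIM is more commonly computed over local sliding windows and averaged; the global form above is the simplest instance of the same formula.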
IV-C Implementation Details
All networks are trained using the Adam solver. We conduct 300 epochs to train the proposed model. The initial learning rate is set to 0.0002 for the first 100 epochs and then linearly decays to 0 over the remaining epochs. During training, the two trade-off parameters are set to fixed empirical values. The code is implemented using the PyTorch library.
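The learning-rate schedule described above can be sketched as follows (a plain-Python sketch; the function name is our own):

```python
def learning_rate(epoch, base_lr=2e-4, constant_epochs=100, total_epochs=300):
    # Constant rate for the first 100 epochs, then linear decay to 0
    # over the remaining 200 epochs.
    if epoch < constant_epochs:
        return base_lr
    frac = (epoch - constant_epochs) / float(total_epochs - constant_epochs)
    return base_lr * (1.0 - frac)
```

In PyTorch, this schedule is typically wired up with `torch.optim.lr_scheduler.LambdaLR` wrapping an Adam optimizer.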
IV-D Results Comparison
We evaluate the proposed model on three tasks on the BraTS2018 dataset, i.e., using T1 and T2 to synthesize the FLAIR modality (T1+T2→FLAIR), using T1 and FLAIR to synthesize the T2 modality (T1+FLAIR→T2), and using T2 and FLAIR to synthesize the T1 modality (T2+FLAIR→T1). We first evaluate the performance for synthesizing FLAIR modality images. Table I shows the quantitative evaluation results for this task. From Table I, it can be seen that our method outperforms all comparison methods in all three metrics (PSNR, NMSE, and SSIM). This suggests that our model can effectively fuse the complementary information from different modalities, which is beneficial to the synthesis performance. Fig. 3 shows qualitative comparisons between the proposed synthesis method and other state-of-the-art methods on this task. As can be seen, our method achieves much better synthesis results. Specifically, our model can effectively synthesize tumor regions (as shown in the first row of Fig. 3), while Pix2pix and CycleGAN either only synthesize blurry tumor regions or fail to synthesize them at all. Similar observations can be made for the second and third rows of Fig. 3. Overall, our method achieves much better synthesis results for the FLAIR modality images, as demonstrated by both qualitative and quantitative measures.
For the second task (i.e., using T1 and FLAIR as inputs to synthesize T2 images), Table II and Fig. 4 show the quantitative and qualitative comparison results for the different methods, respectively. From Table II and Fig. 4, it can be observed that our model outperforms the other synthesis methods. Additionally, in Fig. 4, we can see that our method synthesizes better-quality target images than the other methods. We have similar observations for the results shown in Table III and Fig. 5, where our method obtains the best performance on the third task (i.e., using T2 and FLAIR as inputs to synthesize T1 images) in terms of all evaluation metrics. It is also worth noting that our method outperforms MM-Syns, which likewise uses two modalities to synthesize the target images, but relies on late fusion to learn a common representation and reconstructs the target modality without a GAN model. In contrast, our method presents a hybrid-fusion network that can effectively explore complementary information from multiple modalities as well as exploit their correlations to improve synthesis performance.
We have shown the comparison synthesis results in the axial plane, as slices are fed one-by-one into our synthesis network. To further validate the effectiveness of our model in the sagittal and coronal planes, we show results for these planes in Fig. 6. From Fig. 6, it can be seen that our method still performs better than the other methods and is able to synthesize high-quality target images.
In addition, we also evaluate the performance for synthesizing FLAIR modality images using T1 and T2 images on the Ischemic Stroke Lesion Segmentation Challenge 2015 (ISLES2015) dataset. This dataset consists of multi-spectral MR images. In this study, we choose the sub-acute ischemic stroke lesion segmentation (SISS) cohort of patients. Each case consists of four sequences, namely T1, T2, DWI and FLAIR, which are rigidly co-registered to the FLAIR sequence. More details about the preprocessing steps can be found in the ISLES2015 benchmark paper. For each 2D slice, we crop out a fixed-size image from the center region, and we also split each cropped image into four overlapping patches. Besides, we use 28 training cases and 17 testing cases in this study. For each volume, we also linearly scale the original intensity values to a fixed range. Table IV shows the quantitative evaluation results for this task. From Table IV, it can be seen that our method outperforms all comparison methods in all three metrics. Fig. 7 shows qualitative comparisons between the proposed synthesis method and other state-of-the-art methods on this task. As can be seen, our method achieves much better synthesis results.
IV-E Ablation Study
The proposed synthesis method (Hi-Net) consists of several key components, so we conduct the following ablation studies to verify the importance of each one. First, our model utilizes a novel MFB module to fuse multiple modalities or inputs. For comparison, we define a direct concatenation strategy for fusing multiple modalities (denoted as “ConcateFusion”), as shown in Fig. 8. To evaluate the effectiveness of the MFB module, we compare our full model with three degraded versions as follows: (1) we use the “ConcateFusion” strategy in both the fusion network and the generator network, denoted as “Ours-degraded1”; (2) we use MFB modules in the fusion network and “ConcateFusion” in the generator network, denoted as “Ours-degraded2”; and (3) we use “ConcateFusion” in the fusion network and MFB modules in the generator network, denoted as “Ours-degraded3”. Second, our model also employs a hybrid fusion strategy to fuse the various modalities. Thus, we compare the proposed Hi-Net with models using an early fusion strategy (as shown in Fig. 9(a)) or a late fusion strategy (as shown in Fig. 9(b)). We denote these two strategies as “Ours-earlyFusion” and “Ours-lateFusion”, respectively. Note that the skip connection components are included in both fusion strategies, even if they are not shown in Fig. 9.
Table V shows quantitative evaluation results for the synthesized FLAIR modality images (using T1 and T2 as inputs) when comparing our full model with its ablated versions. Compared with “Ours-degraded1”, it can be seen that our model using the MFB module effectively improves the synthesis performance. This is because the proposed MFB module adaptively weights the different fusion strategies. Besides, when comparing “Ours-degraded2” with “Ours-degraded3”, the results indicate that our model performs better when using MFB modules only in the fusion network than when using them only in the generator network. Further, compared with early and late fusion, the results demonstrate that our proposed hybrid fusion network performs better than both. This is mainly because our Hi-Net exploits the correlations among multiple modalities using the fusion network, and simultaneously preserves the modality-specific properties using the modality-specific networks, resulting in better synthesis performance.
In contrast to most existing methods, which focus on the single-input to single-output task [14, 27, 39, 38, 6, 43], our proposed model fuses multi-modal data (e.g., two modalities as input in our current study) to synthesize the missing modality images. It is worth noting that, compared to using only a single modality as input, our method can effectively gather more information from the different modalities to improve the synthesis performance. This is mainly because the multi-modal data provide complementary information while preserving the properties of each modality.
Besides, our proposed Hi-Net consists of two modality-specific networks and one fusion network, where the modality-specific networks aim to preserve the modality-specific properties and the fusion network aims to exploit the correlations among multiple modalities. In the multi-view learning field, several studies focus on learning a common latent representation to exploit the correlations among multiple views [49, 44], while other methods explore complementary information. However, both of these are important for multi-view/modal learning [52, 9, 47]. For the proposed model, we have considered both aspects to improve fusion performance.
For our proposed model, one potential application is to first synthesize missing modality images and then use them to accomplish a specific task. For example, tumor segmentation and overall survival time prediction rely on multiple MR modalities, but it is common for several modalities to be missing in clinical applications. Thus, our framework can effectively synthesize the missing modality images, after which multi-modal segmentation and overall survival prediction can be performed using existing methods. Besides, it is widely known that a large amount of data is critical to success when training deep learning models. In practice, it is often difficult to collect enough training data, especially for a new imaging modality not yet well established in clinical practice. Moreover, data with high class imbalance or insufficient variability often result in poor classification performance. Thus, our model can synthesize additional multi-modal images, which can be regarded as supplementary training data to boost the generalization capability of current deep learning models.
In this paper, we have proposed a novel end-to-end hybrid-fusion network for multi-modal MR image synthesis. Specifically, our method explores the modality-specific properties within each modality, and simultaneously exploits the correlations across multiple modalities. Besides, we have proposed a layer-wise fusion strategy to effectively fuse multiple modalities within different feature layers. Moreover, an MFB module is presented to adaptively weight different fusion strategies. The experimental results in multiple synthesis tasks have demonstrated that our proposed model outperforms other state-of-the-art synthesis methods in both quantitative and qualitative measures. In the future, we will validate whether the synthetic images as a form of data augmentation can boost the multi-modal learning performance.
-  (2014) Attenuation correction synthesis for hybrid PET-MR scanners: application to brain studies. IEEE Trans. Med. Imag. 33 (12), pp. 2332–2341. Cited by: §II.
-  (2017) Multimodal MR synthesis via modality-invariant latent representation. IEEE Trans. Med. Imag. 37 (3), pp. 803–814. Cited by: §IV-B.
-  (2015) Generalized K-fan multimodal deep model with shared representations. arXiv preprint arXiv:1503.07906. Cited by: §I.
-  (2017) Towards adversarial retinal image synthesis. arXiv:1701.08974. Cited by: §II.
-  (2017) End-to-end adversarial retinal image synthesis. IEEE Trans. Med. Imag. 37 (3), pp. 781–791. Cited by: §II.
-  (2019) Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Trans. Med. Imag. 38 (10), pp. 2375–2388. Cited by: §I, §II, §IV-F.
-  (2014) Learning a deep convolutional network for image super-resolution. In European Conference on Computer Vision, pp. 184–199. Cited by: §I, §II.
-  (2019) Adversarial learning for mono-or multi-modal registration. Medical Image Analysis 58, pp. 101545. Cited by: §I.
-  (2017) Exploring commonality and individuality for multi-modal curriculum learning. In Thirty-First AAAI Conference on Artificial Intelligence, Cited by: §IV-F.
-  (2014) Generative adversarial nets. In Proc. Adv. Neural Inf. Process. Syst., pp. 2672–2680. Cited by: §II.
-  (2017) Synthetic medical images from dual generative adversarial networks. arXiv preprint arXiv:1709.01872. Cited by: §II.
-  (2004) Canonical correlation analysis: an overview with application to learning methods. Neural Computation 16 (12), pp. 2639–2664. Cited by: §II.
-  (2019) Deep multimodal multilinear fusion with high-order polynomial pooling. In Proc. Adv. Neural Inf. Process. Syst., pp. 12113–12122. Cited by: §II.
-  (2017) Simultaneous super-resolution and cross-modality synthesis of 3D medical images using weakly-supervised joint convolutional sparse coding. In Proc. Comput. Vis. Pattern Recognit., Cited by: §I, §II, §IV-F.
-  (2017) Cross-modality image synthesis via weakly coupled and geometry co-regularized joint dictionary learning. IEEE Trans. Med. Imag. 37 (3), pp. 815–827. Cited by: §I, §I, §II.
-  (2017) Image-to-image translation with conditional adversarial networks. In Proc. Comput. Vis. Pattern Recognit., pp. 1125–1134. Cited by: §II, §III-C, §IV-B.
-  (2013) Magnetic resonance image synthesis through patch regression. In IEEE 10th International Symposium on Biomedical Imaging, pp. 350–353. Cited by: §I, §II.
-  (2014) Deep learning based imaging data completion for improved brain disease diagnosis. In Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 305–312. Cited by: §II.
-  (2010) Multiple kernel learning for dimensionality reduction. IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (6), pp. 1147–1160. Cited by: §II.
-  (2018) A digital 3D atlas of the marmoset brain based on multi-modal MRI. Neuroimage 169, pp. 106–116. Cited by: §I.
-  (2018) Concatenated and connected random forests with multiscale patch driven active contour model for automated brain tumor segmentation of MR images. IEEE Trans. Med. Imag. 37 (8), pp. 1943–1954. Cited by: §I.
-  (2017) ISLES 2015-a public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI. Medical Image Analysis 35, pp. 250–269. Cited by: §IV-D.
-  (2015) The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imag. 34 (10), pp. 1993–2024. Cited by: §IV-A.
-  (1993) Mathematical textbook of deformable neuroanatomies. Proceedings of the National Academy of Sciences 90 (24), pp. 11944–11948. Cited by: §II.
-  (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §II.
-  (2011) Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 689–696. Cited by: §I.
-  (2017) Medical image synthesis with context-aware generative adversarial networks. In Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 417–425. Cited by: §I, §I, §II, §IV-F.
-  (2016) 3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients. In Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 212–220. Cited by: §II.
-  (2018) Generative adversarial training for MRA image synthesis using multi-contrast MRI. In International Workshop on PRedictive Intelligence In MEdicine, pp. 147–154. Cited by: §I.
-  (2016) Cross-media shared representation by hierarchical learning with multiple deep networks. In International Joint Conference on Artificial Intelligence, pp. 3846–3853. Cited by: §I.
-  (2013) Magnetic resonance image example-based contrast synthesis. IEEE Trans. Med. Imag. 32 (12), pp. 2348–2363. Cited by: §II.
-  (2019) Brain tumor segmentation on MRI with missing modalities. In International Conference on Information Processing in Medical Imaging, pp. 417–428. Cited by: §I, §IV-F.
-  (2018) Medical image synthesis for data augmentation and anonymization using generative adversarial networks. In International Workshop on Simulation and Synthesis in Medical Imaging, pp. 1–11. Cited by: §IV-F.
-  (2018) Simulating patho-realistic ultrasound images using deep generative networks with adversarial learning. In International Symposium on Biomedical Imaging, pp. 1174–1177. Cited by: §II.
-  (2017) Multi-modal classification of Alzheimer’s disease using nonlinear graph fusion. Pattern Recognition 63, pp. 171–181. Cited by: §I.
-  (2016) Fast patch-based pseudo-CT synthesis from T1-weighted MR images for PET/MR attenuation correction in brain studies. Journal of Nuclear Medicine 57 (1), pp. 136–143. Cited by: §I, §II.
-  (2015) Large-margin multi-modal deep learning for RGB-D object recognition. IEEE Transactions on Multimedia 17 (11), pp. 1887–1898. Cited by: §II.
-  (2018) 3D auto-context-based locality adaptive multi-modality GANs for PET synthesis. IEEE Trans. Med. Imag. 38 (6), pp. 1328–1339. Cited by: §I, §I, §II, §IV-F.
-  (2017) Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans. Med. Imag. 36 (12), pp. 2536–2545. Cited by: §I, §II, §IV-F.
-  (2019) Bi-modality medical image synthesis using semi-supervised sequential generative adversarial networks. IEEE Journal of Biomedical and Health Informatics. Cited by: §I.
-  (2015) Shared representation learning for heterogeneous face recognition. In Proc. Int. Conf. and Workshops on Automatic Face and Gesture Recognit., Vol. 1, pp. 1–7. Cited by: §I.
-  (2018) Generative adversarial network in medical imaging: a review. arXiv preprint arXiv:1809.07294. Cited by: §II.
-  (2019) EA-GANs: edge-aware generative adversarial networks for cross-modality MR image synthesis. IEEE Trans. Med. Imag. Cited by: §II, §IV-B, §IV-F.
-  (2018) Generalized latent multi-view subspace clustering. IEEE Trans. Pattern Anal. Mach. Intell. 42 (1), pp. 86–99. Cited by: §IV-F.
-  (2019) SkrGAN: Sketching-Rendering Unconditional Generative Adversarial Networks for Medical Image Synthesis. In Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 777–785. Cited by: §II.
-  (2018) Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network. In Proc. Comput. Vis. Pattern Recognit., pp. 9242–9251. Cited by: §I, §II.
-  (2016) Group component analysis for multiblock data: common and individual feature extraction. IEEE Trans. Neural. Netw. Learn. Syst. 27 (11), pp. 2426–2439. Cited by: §IV-F.
-  (2019) Deep multi-modal latent representation learning for automated dementia diagnosis. In Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 629–638. Cited by: §I.
-  (2020) Multi-modal latent space inducing ensemble SVM classifier for early dementia diagnosis with neuroimaging data. Medical Image Analysis 60, pp. 101630. Cited by: §IV-F.
-  (2019) Inter-modality dependence induced data recovery for MCI conversion prediction. In Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent., pp. 186–195. Cited by: §I.
-  (2019) Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis. Human brain mapping 40 (3), pp. 1001–1016. Cited by: §II.
-  (2019) Dual shared-specific multiview subspace clustering. IEEE Trans. Cybern. Cited by: §IV-F.
-  (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. Int. Conf. Comput. Vis., pp. 2223–2232. Cited by: §II, §IV-B.
-  (2016) Subspace regularized sparse multitask learning for multiclass neurodegenerative disease identification. IEEE Trans. Biomed. Eng. 63 (3), pp. 607–618. Cited by: §I.
-  (2017) A novel relational regularization feature selection method for joint regression and classification in AD diagnosis. Medical Image Analysis 38, pp. 205–214. Cited by: §I.