Convolutional Neural Networks with Intermediate Loss for 3D Super-Resolution of CT and MRI Scans

01/05/2020
by Mariana-Iuliana Georgescu, et al.

CT scanners that are commonly used in hospitals nowadays produce low-resolution images, up to 512 pixels in size. One pixel in the image corresponds to a one millimeter piece of tissue. In order to accurately segment tumors and make treatment plans, doctors need CT scans of higher resolution. The same problem appears in MRI. In this paper, we propose an approach for the single-image super-resolution of 3D CT or MRI scans. Our method is based on deep convolutional neural networks (CNNs) composed of 10 convolutional layers and an intermediate upscaling layer that is placed after the first 6 convolutional layers. Our first CNN, which increases the resolution on two axes (width and height), is followed by a second CNN, which increases the resolution on the third axis (depth). Different from other methods, we compute the loss with respect to the ground-truth high-resolution output right after the upscaling layer, in addition to computing the loss after the last convolutional layer. The intermediate loss forces our network to produce a better output, closer to the ground-truth. A widely-used approach to obtain sharp results is to add Gaussian blur to the training inputs using a fixed standard deviation. In order to avoid overfitting to a fixed standard deviation, we apply Gaussian smoothing with various standard deviations, unlike other approaches. We evaluate our method in the context of 2D and 3D super-resolution of CT and MRI scans from two databases, comparing it to relevant related works from the literature and to baselines based on various interpolation schemes, using 2x and 4x scaling factors. The empirical results show that our approach attains superior results to all other methods. Moreover, our human annotation study reveals that both doctors and regular annotators chose our method in favor of Lanczos interpolation in 97.55% of cases for the 4x upscaling factor.


I Introduction

Medical centers and hospitals around the globe are typically equipped with single-energy Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) scanners that produce cross-sectional images (slices) of various body parts. The resulting images have a low resolution (typically around 256 pixels per axis), and one pixel usually corresponds to a one millimeter piece of tissue. The thickness of one slice is also one millimeter, so the 3D CT images are composed of volumetric pixels (voxels) that correspond to one cubic millimeter (1 mm³) of tissue. One of the main benefits of this non-invasive scanning technique is that it allows doctors to see if there are malignant tumors inside the body. Nevertheless, doctors, and even machine learning systems [1], are not able to accurately contour (segment) the tumor regions because of the low resolution of CT or MRI scans. According to a team of radiologists from Colțea Hospital in Bucharest, which provided a set of anonymized CT scans for our experiments, the desired resolution is to have one voxel correspond to a thousandth part of a cubic millimeter of tissue, i.e. a cube of 0.1 millimeters per side. In other words, the goal is to increase the resolution of 3D CT and MRI scans by a factor of 10x in each direction.

Fig. 1: Our method for 3D image super-resolution based on two subsequent fully-convolutional neural networks. In the first stage, the input volume is resized in two dimensions (width and height). In the second stage, the processed volume is further resized in the third dimension (depth). For a given scale factor, the input volume is thus upsampled by that factor on all three axes. Best viewed in color.

The main motivation behind our work is to allow radiologists and oncologists to accurately segment tumors and make better treatment plans. In order to achieve the desired goal, we propose a machine learning method that takes a 3D image as input and increases the resolution of the input image by a factor of 2x or 4x, providing a high-resolution 3D image as output. To our knowledge, there are only a few previous works [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18] that study the super-resolution of CT or MRI images. Similar to some of these previous works [2, 3, 4, 5, 6, 7, 9, 11, 12, 13, 14, 15, 16, 17, 18], we approach single-image super-resolution of CT and MRI scans using deep convolutional neural networks (CNNs). CNN models rely on end-to-end training from large amounts of data in order to achieve state-of-the-art results. We propose a CNN architecture composed of 10 convolutional layers and an intermediate sub-pixel convolutional (upscaling) layer [19] that is placed after the first 6 convolutional layers. Different from related works [3, 6, 16, 18] that use the sub-pixel convolutional layer of Shi et al. [19], we add 4 convolutional layers after the upscaling layer. In order to obtain 3D super-resolution, we employ two CNNs with similar architectures, as illustrated in Figure 1. The first CNN increases the resolution on two axes (width and height), while the second CNN takes the output of the first CNN and further increases the resolution on the third axis (depth), thus increasing the resolution on all three axes. Different from related methods [3, 6, 16, 18], we compute the loss with respect to the ground-truth high-resolution output right after the upscaling layer, in addition to computing the loss after the last convolutional layer. The intermediate loss forces our network to produce a better output, closer to the ground-truth. In order to improve the results and obtain sharper images, a common approach is to apply Gaussian smoothing to the input images, using a fixed standard deviation. Different from other medical image super-resolution methods [3, 14, 17], we use various standard deviations in order to avoid overfitting to a certain standard deviation and to improve the generalization capacity of our model.

We conduct super-resolution experiments on two databases of 3D CT and MRI images, one gathered from Colțea Hospital (CH) and one that is publicly available online, known as NAMIC (available at http://hdl.handle.net/1926/1687). We compare our method with several interpolation baselines (nearest neighbor, bilinear, bicubic, Lanczos) and state-of-the-art methods [3, 13, 17], in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). We perform comparative experiments on both 2D and 3D single-image super-resolution under the commonly-used upscaling factors of 2x and 4x. The empirical results indicate that our approach is able to surpass all the other methods included in the experiments. For example, on the NAMIC data set, our method attains higher PSNR and SSIM values for 3D super-resolution than those reported by Pham et al. [13] in the same setting. Furthermore, we conduct a human evaluation study, asking 6 doctors and 12 regular annotators to choose between the CT images produced by our method and those produced by Lanczos interpolation (the best interpolation method). The annotators opted for our method in favor of Lanczos interpolation in the overwhelming majority of cases, for both the 2x and the 4x upscaling factors. These results indicate that our method is significantly better than Lanczos interpolation. To our knowledge, we are the first to conduct a human evaluation study on the super-resolution of CT scans. In order to allow further developments and results replication, we provide our code as open source in a public repository (available at https://github.com/lilygeorgescu/3d-super-res-cnn).

To summarize, our contribution is threefold:

  • We propose a novel CNN model for 3D super-resolution of CT and MRI scans, which is based on an intermediate loss added to the standard output loss and on smoothing the input using random standard deviations for the Gaussian blur.

  • We conduct a human evaluation study to determine the quality and the utility of our super-resolution results.

  • We provide our code online for download, allowing our results to be easily replicated.

We organize the rest of this paper as follows. We present related art in Section II. We describe our method in detail in Section III. We present experiments and results in Section IV. Finally, we draw our conclusions in Section V.

II Related Work

The purpose of single-image super-resolution (SISR) is to reconstruct a high-resolution (HR) image from its low-resolution (LR) counterpart. Before the deep learning age, researchers used exemplar-based or sparse coding methods for SISR. Exemplar-based methods learn mapping functions from external LR and HR exemplar pairs [20, 21, 22]. Sparse coding methods [23] are representative of external exemplar-based SR methods. For example, the method of Yang et al. [23] builds a dictionary of LR patches and the corresponding HR patches.

To our knowledge, the first work to present a deep learning approach for SISR is [24]. Dong et al. [24] proposed a CNN composed of 8 convolutional layers. The network was trained in an end-to-end fashion, minimizing the reconstruction error between the HR image and the output of the network. They used bicubic interpolation to resize the image before giving it as input to the network. Hence, the CNN takes a blurred HR image as input and learns how to make it sharper. Since the input is an HR image, this type of CNN is time-consuming. Therefore, Shi et al. [19] introduced a new method for upsampling the image using the CNN activation maps produced by the last layer. Their network is more efficient, because it builds the HR image only at the very end. Other works, such as [25], proposed deeper architectures, focusing strictly on accuracy. Indeed, Zhang et al. [25] presented one of the deepest CNNs used for SR, composed of 400 layers. They used a channel attention mechanism and residual blocks to handle the depth of the network.

For medical SISR, some researchers have focused on sparse representations [8, 10], while others on training convolutional neural networks [1, 2, 3, 4, 5, 6, 7, 9, 11, 12, 14, 15, 16, 18].

The authors of [8] proposed a weakly-supervised joint convolutional sparse coding method to simultaneously solve the problems of super-resolution and cross-modal image synthesis. In [10], the authors adopted a method based on compressed sensing and self-similarity constraint, obtaining better results than [17] in terms of SSIM and PSNR.

Some works [1, 3, 5, 6, 9, 11, 10, 14, 15, 16, 18] focused on 2D upsampling, i.e. on increasing the width and height of CT/MRI slices, while other works [2, 4, 8, 12] focused on 3D upsampling, i.e. on increasing the resolution of full 3D CT/MRI scans on all three axes (width, height and depth).

For 2D upsampling, some works [1, 5, 9, 14] used interpolated low-resolution (ILR) images, while other works [3, 6, 16, 18] used the efficient sub-pixel convolutional neural network (ESPCN) introduced in [19]. Similar to the latter approaches [3, 6, 16, 18], we employ the sub-pixel convolutional layer of Shi et al. [19]. Different from these related works [3, 6, 16, 18], we add a convolutional block after the sub-pixel convolutional layer, in order to enhance the HR output image. Furthermore, we propose a novel loss function for our CNN model. In addition to computing the loss between the output image and the ground-truth high-resolution image, we also compute the loss between the intermediate image given by the sub-pixel convolutional layer and the ground-truth high-resolution image. This forces our neural network to learn a better intermediate representation, increasing its performance.

There are some works [2, 11, 15] that employed generative adversarial networks (GANs) [26] to upsample medical images. Our approach based on fully-convolutional neural networks is less related to GAN-based SISR methods.

For 3D upsampling, Chen et al. [2] trained a CNN with 3D convolutions and used a GAN-based loss function to produce sharper and more realistic images. In order to upsample a 3D image, Du et al. [4] employed a deconvolutional layer composed of 3D filters, in an attempt to reduce the computational complexity. Like [2, 4, 8, 12], we tackle the problem of 3D CT/MRI image super-resolution. However, instead of using inefficient 3D filters to upsample the LR images in a single step, we propose a two-stage approach that uses efficient 2D filters. Our approach employs one CNN to increase the resolution in width and height, and another CNN to further increase the resolution depth-wise.

Most SISR works [3, 14, 17] apply Gaussian smoothing with a fixed standard deviation on the training images, thus training the models in more difficult conditions. However, we believe that using a fixed standard deviation can harm performance, as deep models tend to overfit to the training data. Different from the standard methodology, each time we apply smoothing on a training image, we randomly choose a different standard deviation. This simple change improves the generalization capacity of our model, yielding better performance at test time.

While many works focus only on the super-resolution task, the work of Sert et al. [1] is focused on the gain brought by the upsampled images in solving a different task. Indeed, the authors of [1] obtained a considerable improvement in the classification of segmented brain tumors when the upsampled images were used.

We note that there is also some effort in designing CT scanners that directly produce scans of higher resolution. For example, X-ray microtomography (micro-CT) [27], which is based on pixel sizes of the cross-sections in the micrometer range, has applications in medical imaging [28, 29], but micro-CT scanners are far more expensive than standard CT scanners. Hence, the vast majority of hospitals rely on standard CT scanners, which can only provide macroscopic images.

Another alternative to standard (single-energy) CT is dual-energy or multi-energy CT [30]. In dual-energy CT, an additional measurement is obtained with a second X-ray spectrum, allowing the differentiation of multiple materials that cannot be distinguished in single-energy CT. Although several studies present the benefits of dual-energy CT [30, 31], it has remained underutilized over the past decade, probably due to the novelty of the medical methodology and the higher costs compared to single-energy CT scanners.

Different from micro-CT or dual-energy CT, our focus is to increase the resolution of single-energy CT images using an algorithm based on machine learning.

III Method

Our approach for solving the 3D image super-resolution problem is divided into two stages, as illustrated in Figure 1. In the first stage, we upsample the image on height and width using a deep fully-convolutional neural network. Then, in the second stage, we further upsample the resulting image on the depth axis using another fully-convolutional neural network. Therefore, our complete method is designed for resizing the 3D input volume on all three axes. While the CNN used in the first stage resizes the image on two axes, the CNN used in the second stage resizes the image on a single axis. Both CNNs share the same architecture, the only difference being in the upsampling layer (the second CNN upsamples in only one direction). At training time, our CNNs operate on patches. However, since the architecture is fully-convolutional, the models can operate on entire slices at inference time. A sketch of the two-stage inference procedure is given below.
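To make the two-stage procedure concrete, we include a minimal PyTorch sketch of the inference step. The helper name super_resolve_3d and the slice-wise application of the two networks are illustrative assumptions on our part, not the authors' released implementation; each network is assumed to map a 2D image of shape (1, 1, h, w) to its high-resolution reconstruction:

    import torch

    @torch.no_grad()
    def super_resolve_3d(volume, net_hw, net_d):
        # volume: tensor of shape (D, H, W); net_hw upscales the last two axes,
        # while net_d upscales only the second-to-last axis.
        # Stage 1: upscale height and width, slice by slice along the depth axis.
        stage1 = torch.cat([net_hw(s[None, None]) for s in volume], dim=0)[:, 0]
        # stage1 has shape (D, sH, sW), where s denotes the scale factor.
        # Stage 2: upscale the depth axis by treating each (D, sW) plane as a
        # 2D image whose first axis is the depth.
        planes = stage1.permute(1, 0, 2)  # (sH, D, sW)
        stage2 = torch.cat([net_d(p[None, None]) for p in planes], dim=0)[:, 0]
        return stage2.permute(1, 0, 2)    # (sD, sH, sW)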

We further describe in detail the proposed CNN architecture, loss function and data augmentation procedure.

III-A Architecture

Fig. 2: Our convolutional neural network for super-resolution on two axes, height and width. The network is composed of 10 convolutional layers and an upsampling (sub-pixel convolutional) layer. It takes low-resolution patches as input and outputs high-resolution patches that are larger by the scale factor on both axes. The convolutional layers are represented by green arrows. The sub-pixel convolutional layer is represented by the red arrow. The long-skip and short-skip connections are represented by blue arrows. Best viewed in color.

Our architecture, depicted in Figure 2 and used for both CNNs, is composed of 10 convolutional layers (conv), each followed by Rectified Linear Units (ReLU) [32] activations. All convolutional layers contain filters with the same small spatial support. The 10 conv layers are divided into two blocks. The first block, formed of the first 6 conv layers, starts with the input of the neural network and ends just before the upscaling layer. Each of the first 5 convolutional layers contains the same number of filters. For the CNN used in the first stage, the number of filters in the sixth convolutional layer is equal to the square of the scale factor, e.g. for a scale factor of 2x the number of filters is 4. For the CNN used in the second stage, the number of filters in the sixth convolutional layer is equal to the scale factor, e.g. for a scale factor of 2x the number of filters is 2. The difference is caused by the fact that the first CNN upscales on two axes, while the second CNN upscales on one axis. The first convolutional block contains a short-skip connection, from the first conv layer to the third conv layer, and a long-skip connection, from the first conv layer to the fifth conv layer.

Fig. 3: An example of low-resolution input activation maps and the corresponding high-resolution output activation map given by the sub-pixel convolutional layer for upscaling on two axes. For a scaling factor of r in both directions, the sub-pixel convolutional layer requires r² activation maps as input. Best viewed in color.
Fig. 4: An example of low-resolution input activation maps and the corresponding high-resolution output activation map given by the sub-pixel convolutional layer for upscaling on one axis. For a scaling factor of r in one direction, the sub-pixel convolutional layer requires r activation maps as input. Best viewed in color.

The first convolutional block is followed by a sub-pixel convolutional (upscaling) layer, which was introduced in [19]. In the upscaling layer, the activation maps produced by the sixth conv layer are assembled into a single activation map. Throughout the first convolutional block, the spatial size of the low-resolution input is preserved, i.e. the activation maps of the sixth conv layer have h×w components, where h and w are the height and the width of the input image X. In order to increase the input r times on both axes, the output of the sixth conv layer must be a tensor of h×w×r² components. The activation map resulting from the sub-pixel convolutional layer is a matrix of (r·h)×(r·w) components. For super-resolution on two axes, the pixels are rearranged as shown in Figure 3. In a similar fashion, we can increase the input r times on one axis. In this case, the output of the sixth conv layer must be a tensor of h×w×r components. This time, the activation map resulting from the sub-pixel convolutional layer can be either a matrix of (r·h)×w components or a matrix of h×(r·w) components, depending on the direction in which we aim to increase the resolution. For super-resolution on one axis, the pixels are rearranged as shown in Figure 4. To our knowledge, we are the first to propose a sub-pixel convolutional (upscaling) layer for super-resolution in one direction.
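Since the one-axis rearrangement is, to our knowledge, new, we illustrate it with a short sketch based on plain tensor reshapes; the helper name pixel_shuffle_1d is ours, and the two-axis case corresponds to the standard torch.nn.PixelShuffle:

    import torch

    def pixel_shuffle_1d(x, r):
        # x: tensor of shape (B, C*r, H, W); returns (B, C, H*r, W) by
        # interleaving each group of r activation maps along the height axis.
        b, cr, h, w = x.shape
        c = cr // r
        x = x.view(b, c, r, h, w)         # split the channel axis into (C, r)
        x = x.permute(0, 1, 3, 2, 4)      # (B, C, H, r, W)
        return x.reshape(b, c, h * r, w)  # sub-pixel interleaving on one axis

    # For comparison, torch.nn.PixelShuffle(r) performs the two-axis variant,
    # mapping (B, C*r*r, H, W) to (B, C, H*r, W*r).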

When Shi et al. [19] introduced the sub-pixel convolutional layer, they used it as the last layer of their CNN. Hence, the output depth of the upscaling layer is equal to the number of channels in the output image. Since we are working with CT/MRI (grayscale) images, the output has a single channel. Different from Shi et al. [19], we employ further convolutions after the upscaling layer. In our architecture, the upscaling layer is followed by our second convolutional block, which starts with the seventh convolutional layer and ends with the tenth convolutional layer. The first three conv layers in our second block share the same number of filters. The tenth conv layer contains a single convolutional filter, since our output is a grayscale image with a single channel. The second convolutional block contains a short-skip connection, from the seventh conv layer to the ninth conv layer. The spatial size of H×W components of the activation maps is preserved throughout the second convolutional block, where H and W are the height and the width of the output image Y.
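For clarity, we also provide a compact PyTorch sketch of the two-axis CNN described above. The 3×3 kernel size, the filter count of 32 and the additive interpretation of the skip connections are our assumptions for illustration; the depth CNN is obtained by giving the sixth layer r filters and replacing nn.PixelShuffle with the one-axis shuffle sketched earlier:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SRNet(nn.Module):
        # Sketch of the two-axis CNN: 6 conv layers, a sub-pixel upscaling
        # layer, then 4 more conv layers; kernel size, filter count and the
        # additive skip connections are illustrative assumptions.

        def __init__(self, n_filters=32, scale=2):
            super().__init__()
            self.conv1 = nn.Conv2d(1, n_filters, 3, padding=1)
            self.conv2 = nn.Conv2d(n_filters, n_filters, 3, padding=1)
            self.conv3 = nn.Conv2d(n_filters, n_filters, 3, padding=1)
            self.conv4 = nn.Conv2d(n_filters, n_filters, 3, padding=1)
            self.conv5 = nn.Conv2d(n_filters, n_filters, 3, padding=1)
            self.conv6 = nn.Conv2d(n_filters, scale ** 2, 3, padding=1)
            self.shuffle = nn.PixelShuffle(scale)  # r*r maps -> one HR map
            self.conv7 = nn.Conv2d(1, n_filters, 3, padding=1)
            self.conv8 = nn.Conv2d(n_filters, n_filters, 3, padding=1)
            self.conv9 = nn.Conv2d(n_filters, n_filters, 3, padding=1)
            self.conv10 = nn.Conv2d(n_filters, 1, 3, padding=1)

        def forward(self, x):
            a1 = F.relu(self.conv1(x))
            a2 = F.relu(self.conv2(a1))
            a3 = F.relu(self.conv3(a2) + a1)  # short-skip: conv1 -> conv3
            a4 = F.relu(self.conv4(a3))
            a5 = F.relu(self.conv5(a4) + a1)  # long-skip: conv1 -> conv5
            hr_mid = self.shuffle(F.relu(self.conv6(a5)))  # intermediate HR
            b1 = F.relu(self.conv7(hr_mid))
            b2 = F.relu(self.conv8(b1))
            b3 = F.relu(self.conv9(b2) + b1)  # short-skip: conv7 -> conv9
            return self.conv10(b3), hr_mid    # both terms enter the loss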

III-B Losses and Optimization

In order to obtain a CNN model for single-image super-resolution, the aim is to minimize the differences between the ground-truth high-resolution image and the output image provided by the CNN. Researchers typically employ the mean absolute difference as the objective function. Given a low-resolution input image X and the corresponding ground-truth high-resolution image Y, the loss based on the mean absolute value is formally defined as follows:

\mathcal{L}(\theta, X, Y) = \frac{1}{H \cdot W} \sum_{i=1}^{H} \sum_{j=1}^{W} \left| Y_{ij} - f(\theta, X)_{ij} \right|,    (1)

where θ represents the CNN parameters (weights), f is the transformation function learned by the CNN, and W and H represent the width and the height of the output image Y, respectively.

When we train our CNN model, we do not employ the standard approach of minimizing only the difference between the output provided by the CNN and the ground-truth HR image. Instead, we propose an approach based on an additional intermediate loss function. Since the conv layers after the upscaling layer are meant to refine the high-resolution image without taking any additional information from the low-resolution input image, we note that the high-resolution image obtained immediately after the upscaling layer should be as similar as possible to the ground-truth high-resolution image. Therefore, we propose a loss function that aims to minimize the difference between the intermediately-obtained HR image and the ground-truth HR image, in addition to minimizing the difference between the final HR image and the ground-truth HR image. Let f₁ denote the transformation function that corresponds to the first convolutional block and the upscaling layer, and let f₂ denote the transformation function that corresponds to the second convolutional block. With these notations, the transformation function f of our full CNN architecture can be written as follows:

f(\theta, X) = f_2(\theta_2, f_1(\theta_1, X)),    (2)

where θ are the parameters of the full CNN, θ₁ are the parameters of the first convolutional block and θ₂ are the parameters of the second convolutional block, i.e. θ is a concatenation of θ₁ and θ₂. Having defined f₁ and f₂ as above, we can formally write our loss function as follows:

\mathcal{L}_{full}(\theta, X, Y) = \mathcal{L}(\theta, X, Y) + \lambda \cdot \mathcal{L}_{int}(\theta_1, X, Y),    (3)

where λ is a parameter that controls the importance of the intermediate loss with respect to the standard loss, \mathcal{L} is the standard loss defined in Equation (1) and the intermediate loss \mathcal{L}_{int} is defined as follows:

\mathcal{L}_{int}(\theta_1, X, Y) = \frac{1}{H \cdot W} \sum_{i=1}^{H} \sum_{j=1}^{W} \left| Y_{ij} - f_1(\theta_1, X)_{ij} \right|.    (4)

In the experiments, we set λ = 1, since we did not find any strong reason to assign a lower or higher importance to the intermediate loss. By replacing f with the composition from Equation (2) and expanding the loss values from Equation (1) and Equation (4), Equation (3) becomes:

\mathcal{L}_{full}(\theta, X, Y) = \frac{1}{H \cdot W} \sum_{i=1}^{H} \sum_{j=1}^{W} \Big( \left| Y_{ij} - f_2(\theta_2, f_1(\theta_1, X))_{ij} \right| + \lambda \cdot \left| Y_{ij} - f_1(\theta_1, X)_{ij} \right| \Big).    (5)

In order to optimize towards the objective defined in Equation (5), we employ the Adam optimizer [33], which is known to converge faster than Stochastic Gradient Descent.
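As a minimal sketch of how Equation (5) translates into code, assuming the SRNet module sketched in Section III-A and a placeholder learning rate:

    import torch
    import torch.nn.functional as F

    model = SRNet(n_filters=32, scale=2)   # from the sketch in Section III-A
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # placeholder lr

    def training_step(lr_patch, hr_patch, lam=1.0):
        # lam plays the role of lambda in Equation (3); we use lambda = 1.
        output, hr_mid = model(lr_patch)
        # Standard L1 loss on the final output plus the intermediate L1 loss
        # on the image produced right after the upscaling layer.
        loss = F.l1_loss(output, hr_patch) + lam * F.l1_loss(hr_mid, hr_patch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()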

III-C Data Augmentation

Fig. 5: Distribution of training samples (represented by green triangles) and test samples (represented by red circles), when the training samples are smoothed using a fixed standard deviation (left-hand side) versus using a randomly-chosen standard deviation (right-hand side). In the example on the left-hand side, overfitting on the training data leads to poor results on test data. In the example on the right-hand side, the danger of overfitting is diminished because the test distribution is included in the training distribution. Best viewed in color.

A common approach to force the CNN to produce sharper output images is to apply Gaussian smoothing with a fixed standard deviation at training time [3, 14, 17]. By training the CNN on blurred low-resolution images, it will have an easier task during inference, when the input images are no longer blurred. However, smoothing only the training images will inherently generate a distribution gap between training and testing data. If the CNN overfits to the training distribution, it might not produce the desired results at inference time. We propose to solve this problem by using a randomly-chosen standard deviation for each training image. Although the training data distribution will still be different from the testing data distribution, it will include the distribution of the test samples, as illustrated in Figure 5. In order to augment the training data, we apply Gaussian blur with a probability of 0.5 (only half of the images are smoothed), using a fixed-size kernel and a standard deviation randomly chosen from a fixed interval.
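As a minimal sketch of this augmentation, where the sigma interval is an illustrative placeholder rather than the exact interval used in our experiments (SciPy derives the kernel size from sigma):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def random_blur(patch, p=0.5, sigma_range=(0.0, 1.5)):
        # With probability p, smooth the training patch using a standard
        # deviation drawn uniformly at random from sigma_range.
        if np.random.rand() < p:
            sigma = np.random.uniform(*sigma_range)
            patch = gaussian_filter(patch, sigma=sigma)
        return patch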

IV Experiments

IV-A Data Sets

The first data set used in the experiments consists of anonymized 3D brain CT images provided by the Medical School at Colțea Hospital. We further refer to this data set as the Colțea Hospital (CH) data set. In order to fairly train and test our CNN models and baselines, we randomly selected a subset of the 3D images for training and used the remaining images for testing. The height and the width of the slices, as well as the number of slices per 3D image, vary from one scan to another. The resolution of a voxel is 1×1×1 mm.

The second data set used in our experiments is the National Alliance for Medical Image Computing (NAMIC) Brain Multimodality data set. The NAMIC data set consists of 3D MRI images, each composed of slices of equal size. As for the CH data set, the resolution of a voxel is 1×1×1 mm. For our experiments, we used the T1-weighted (T1w) and T2-weighted (T2w) images independently. Following [13], we split the NAMIC data set into a training set and a test set of 3D images, keeping the same split for T1w and T2w.

IV-B Experimental Setup

IV-B1 Evaluation Metrics

Like most previous works [3, 5, 10, 12, 13, 14, 15, 16, 17, 18], we employ two evaluation metrics, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Although the PSNR is one of the most widely used metrics for image reconstruction, some researchers [34, 35] argued that it is not highly indicative of the perceived similarity. The SSIM [34] aims to address this shortcoming by taking contrast, luminance and texture into account. The result of the SSIM is a number between -1 and 1, where a value of 1 means that the ground-truth image and the reconstructed image are identical. Similarly, a higher PSNR value indicates a better reconstruction, although the PSNR does not have an upper bound.
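For reference, the two metrics can be computed as in the short sketch below; the PSNR follows its standard definition, while the SSIM relies on scikit-image, which is a tooling choice of ours rather than a requirement of the method:

    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(gt, pred, max_val=255.0):
        # Peak signal-to-noise ratio in dB; higher means a closer reconstruction.
        mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    def ssim(gt, pred):
        # Structural similarity in [-1, 1]; 1 means the images are identical.
        return structural_similarity(gt, pred, data_range=pred.max() - pred.min())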

IV-B2 Human Evaluation

Because the above metrics rely only on the pixel values and not on the perceived visual quality, we decided to also evaluate our method using human annotators. Although a deep learning method can provide better PSNR and SSIM values, it might produce artifacts that could be misleading for diagnosis and treatment. We thus have to make sure that our approach does not produce any unwanted artifacts visible to humans. We conducted the human evaluation study on the CH data set, testing our CNN-based method against Lanczos interpolation. We used CT slices extracted from high-resolution 3D images obtained after applying super-resolution on all three axes. For each upsampling factor, 2x and 4x, we extracted 100 CT slices at random from the test set. Hence, each human evaluator had to annotate 200 image pairs (100 for each upsampling factor). For each evaluation sample, an annotator would see the original image in the middle and the two reconstructed images on its sides, one on the left side and the other on the right side. The annotators had a magnifying glass tool that allowed them to look at details and discover artifacts. The locations (left or right) of the images reconstructed by our CNN and by Lanczos interpolation were randomly picked every time. To prevent any form of cheating, the randomly picked locations were unknown to the annotators. For each test sample, we asked each annotator to select the image that best reconstructed the original image. Our experiment was completed by 18 human annotators, 6 of them being doctors specialized in radiotherapy and oncology. In total, we collected 3,600 annotations (18 annotators × 200 samples).

IV-B3 Baselines

We compare our method with standard resizing methods based on various interpolation schemes, namely nearest neighbors, bilinear, bicubic and Lanczos. In addition to these baselines, we compare with two methods [3, 17] that focused on 2D SISR and one method [13] that focused on 3D SISR.

TABLE I: Preliminary 2D super-resolution results on the CH data set for a fixed upscaling factor. The PSNR and the SSIM values are reported for various patch sizes and different numbers of filters, along with the inference time (in seconds) per CT slice, measured on an Nvidia GeForce 940MX GPU with 2GB of RAM.

IV-C Parameter Tuning and Preliminary Results

We conduct a series of preliminary experiments to determine the optimal patch size, as well as the optimal width (number of convolutional filters) of our CNN. We tried out several patch sizes and several values for the number of filters, the latter being shared by all conv layers in our network. These parameters were tuned in the context of 2D super-resolution on the CH data set. The corresponding results are presented in Table I. First of all, we note that one of the patch sizes yields the best SSIM and PSNR values among the considered options. Second of all, we observe that adding more filters to the conv layers slightly increases the SSIM and PSNR values. However, the gains in terms of SSIM and PSNR come with a great cost in terms of time. For example, the largest number of filters that we considered triples the processing time in comparison with the smallest one. For the subsequent experiments, we thus opt for the patch size and the number of filters that offer the best trade-off between accuracy and processing time.

We believe that it is important to note that, although the number of training slices is typically in the range of a few hundred, the number of training patches is typically in the range of hundreds of thousands. This is also the case for the patches extracted from the CH data set. We thus stress that the number of training samples is high enough to train highly-accurate deep learning models.

During training, we used the same mini-batch size throughout all the experiments; in a set of preliminary experiments, we did not observe any significant differences when using smaller or larger mini-batches. In each experiment, we train the CNN for a fixed number of epochs, starting with a higher learning rate that is decreased partway through training.

IV-D Ablation Study Results

TABLE II: Ablation 2D super-resolution results on the CH data set. The PSNR and the SSIM values are reported for various ablated versions of our CNN model, obtained by removing the second conv block, the intermediate loss, the short-skip connections, the long-skip connection or the variable standard deviation. The best results are highlighted in bold.

We perform an ablation study to emphasize the effect of the various components on the overall performance. The ablation results obtained on the CH data set for super-resolution on height and width are presented in Table II.

In our first ablation experiment, we eliminated all the enhancements in order to show the performance level of a baseline CNN on the CH data set. Since there are several SISR works [3, 6, 16, 18] based on the standard ESPCN model [19], we eliminated the second convolutional block in the second ablation experiment, transforming our architecture into a standard ESPCN architecture. This causes noticeable performance drops in terms of both SSIM and PSNR. In the subsequent ablation experiments, we removed, in turn, the intermediate loss, the short-skip connections and the long-skip connection. The results presented in Table II indicate that all these components are relevant to our model, bringing performance benefits in terms of both SSIM and PSNR. In our last ablation experiment, we used a fixed standard deviation instead of a variable one for the Gaussian blur added on training patches. We notice that our data augmentation approach based on a variable standard deviation brings the highest gains in terms of SSIM and PSNR, with respect to the other ablated components. Overall, the ablation study indicates that all the proposed enhancements are useful.

IV-E Results on CH Data Set

TABLE III: 2D super-resolution results of our CNN model versus several interpolation baselines (nearest neighbor, bilinear, bicubic and Lanczos) on the CH data set. The PSNR and the SSIM values are reported for two upscaling factors, 2x and 4x. The best result on each column is highlighted in bold.
TABLE IV: 3D super-resolution results of our CNN model versus several interpolation baselines (nearest neighbor, bilinear, bicubic and Lanczos) on the CH data set. The PSNR and the SSIM values are reported for two upscaling factors, 2x and 4x. The best result on each column is highlighted in bold.

We first compare our CNN-based model with a series of interpolation baselines on the CH data set. We present the results for super-resolution on two axes (width and height) in Table III. Among the considered baselines, the Lanczos interpolation method provides better results than the bicubic, the bilinear and the nearest neighbor methods. Our CNN model is able to surpass all baselines for both upscaling factors, 2x and 4x. Compared to the best interpolation method (Lanczos), our method is better in terms of both SSIM and PSNR.

We note that the results reported for our method in Table III are better than those reported in Table II. The difference comes from the self-ensemble strategy of Lim et al. [36], which we employ to boost the performance of our method. For each input image, the self-ensemble strategy consists in generating additional images using geometric transformations, e.g. rotations and flips. Following Lim et al. [36], we generate 7 augmented images from the LR input image, upsampling all 8 images (the original image and the 7 additional ones) using our CNN. We then apply the inverse transformations to the resulting 8 HR images in order to obtain 8 output images that are aligned with the ground-truth HR image. The final output image is obtained by taking the median of the HR images. In the following experiments on the CH and NAMIC data sets, the reported results always include the described self-ensemble strategy.
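A sketch of the self-ensemble procedure is given below, assuming net maps a low-resolution image of shape (1, 1, H, W) to its high-resolution reconstruction; the eight variants are the four rotations, with and without a horizontal flip:

    import torch

    @torch.no_grad()
    def self_ensemble(net, img):
        outputs = []
        for flip in (False, True):
            base = torch.flip(img, dims=[-1]) if flip else img
            for k in range(4):
                out = net(torch.rot90(base, k, dims=[-2, -1]))
                out = torch.rot90(out, -k, dims=[-2, -1])  # undo the rotation
                if flip:
                    out = torch.flip(out, dims=[-1])       # undo the flip
                outputs.append(out)
        # Aggregate the aligned reconstructions with the pixel-wise median.
        return torch.stack(outputs).median(dim=0).values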

We provide the results for super-resolution on all three axes in Table IV. First of all, we notice that the SSIM and the PSNR values are lower for all methods when dealing with 3D super-resolution (Table IV) instead of 2D super-resolution (Table III). This shows that the task of 3D super-resolution is much harder than 2D super-resolution, as expected. Nevertheless, our method exhibits smaller performance drops when going from 2D super-resolution to 3D super-resolution. As in the 2D super-resolution experiments on the CH data set, our CNN model for 3D super-resolution is superior to all baselines for both upscaling factors. We thus conclude that our CNN model is better than all interpolation baselines on the CH data set, for both 2D and 3D super-resolution and for all upscaling factors.

IV-F Results on NAMIC Data Set

TABLE V: 2D and 3D super-resolution results of our CNN model versus several state-of-the-art methods [3, 13, 17] and the Lanczos interpolation baseline on the NAMIC data set. For Zeng et al. [17], we included results for both single-channel super-resolution (SCSR) and multi-channel super-resolution (MCSR). The PSNR and the SSIM values are reported for both T1w and T2w images and for two upscaling factors, 2x and 4x. The best results on each column are highlighted in bold.
Fig. 6: Image super-resolution examples selected from the NAMIC data set. In order to obtain the low-resolution input images, the original NAMIC images were downsampled. The HR images generated by Lanczos interpolation and by our CNN model are compared with the original (ground-truth) images.

On the NAMIC data set, we compare our method with the best-performing interpolation method on the CH data set, namely Lanczos interpolation, as well as the state-of-the-art methods [3, 13, 17] reporting results on NAMIC. We present the corresponding 2D and 3D super-resolution results in Table V.

We note that most previous works, including [3, 13, 17], used bicubic interpolation as a baseline. Unlike these works, we opted for Lanczos interpolation, which provided better results than bicubic interpolation and the other interpolation methods on the CH data set. The results presented in Table V indicate that not all state-of-the-art methods attain better performance than Lanczos interpolation, showing that Lanczos interpolation is a much stronger baseline.

While the state-of-the-art methods [3, 13, 17] presented results only for some cases, either 2D super-resolution on T1w images or 3D super-resolution on T2w images, we provide our results for all possible cases on the NAMIC data set. We note that our CNN model surpasses Lanczos interpolation in each and every case. Furthermore, our model provides superior results to all the state-of-the-art methods [3, 13, 17] reporting results on the NAMIC data set.

In addition to the quantitative results shown in Table V, we present qualitative results in Figure 6. We selected 5 examples of 2D super-resolution results generated by Lanczos interpolation and by our CNN model. A close inspection reveals that our results are generally sharper than those of Lanczos interpolation. As also confirmed by the SSIM and PSNR values presented in Table V, the images generated by our CNN are closer to the ground-truth images. At the scale factor considered in Figure 6, our CNN does not produce any patterns or artifacts that deviate from the ground-truth.

IV-G Human Evaluation Results

TABLE VI: Human evaluation results collected from 6 doctors and 12 regular annotators, for the comparison between our CNN-based method and Lanczos interpolation. For each upscaling factor (2x and 4x), each annotator had to select an option for 100 image pairs. To prevent cheating, the randomly picked locations (left or right) of the generated HR images were unknown to the annotators.

We provide the outcome of the human evaluation study in Table VI. The study reveals that both doctors and regular annotators opted for our approach in favor of Lanczos interpolation at an overwhelming rate, for both the 2x and the 4x scale factors. For the 2x scale factor, several of the 18 annotators preferred the output of our CNN in all the 100 presented cases. We note that doctors #2 and #4 opted for Lanczos interpolation in 15 and 14 cases (for the 2x scale factor), respectively, which was not typical of the other annotators. Similarly, for the 4x scale factor, there are 3 annotators (doctor #4, doctor #5 and person #6) that seem to prefer Lanczos interpolation at a higher rate than the other annotators. After discussing with the doctors about their choices, we discovered that, in most cases, they prefer the sharper output of our CNN. However, the CNN seems to also introduce some reconstruction patterns (learned from training data) that do not correspond to the ground-truth. This phenomenon seems to be more prevalent at the 4x scale factor, which explains why doctors #4 and #5 preferred Lanczos interpolation in more cases than the other doctors. They considered that it is safer to opt for Lanczos interpolation, although the output is blurred and less informative.

Based on our human evaluation study, we concluded with the doctors that going beyond the 4x scale factor, solely with a method based on algorithmic super-resolution, is neither safe (a CNN might introduce too many patterns unrelated to the ground-truth) nor helpful (a standard interpolation method is not informative). Therefore, in order to reach the 10x scale factor desired by the doctors, we have to look in other directions in future work. A promising direction is to combine multiple inputs, e.g. by using CT and MRI scans of the same person or by using CT/MRI scans taken at different moments in time (before and after the contrast agent is introduced).

V Conclusion

In this paper, we have presented an approach based on fully-convolutional neural networks for the super-resolution of CT/MRI scans. Our method is able to reliably upscale 3D CT/MRI images up to a scale factor of 4x. We have compared our approach with several interpolation baselines and state-of-the-art methods [3, 13, 17]. The empirical results indicated that our approach provides superior results on both the CH and the NAMIC data sets. We have also conducted a human evaluation study, showing that our method is significantly better than Lanczos interpolation. The human evaluation study also revealed the limitations of a purely algorithmic approach. The doctors invited in our study concluded that going to a scale factor higher than 4x requires alternative solutions. In future work, we aim to continue our research by extending the proposed CNN method to multi-channel inputs. This will likely help us in achieving higher upscaling factors, e.g. the 10x factor required for the accurate diagnostics and treatment of cancer, an actively studied and extremely important research topic [37, 38].

References

  • [1] E. Sert, F. Özyurt, and A. Doğantekin, “A new approach for brain tumor diagnosis system: single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network,” Medical Hypotheses, vol. 133, p. 109413, 2019.
  • [2] Y. Chen, F. Shi, A. G. Christodoulou, Y. Xie, Z. Zhou, and D. Li, “Efficient and accurate MRI super-resolution using a generative adversarial network and 3D multi-level densely connected network,” in Proceedings of MICCAI, 2018, pp. 91–99.
  • [3] X. Du and Y. He, “Gradient-Guided Convolutional Neural Network for MRI Image Super-Resolution,” Applied Sciences, vol. 9, no. 22, p. 4874, 2019.
  • [4] J. Du, L. Wang, A. Gholipour, Z. He, and Y. Jia, “Accelerated Super-resolution MR Image Reconstruction via a 3D Densely Connected Deep Convolutional Neural Network,” in Proceedings of BIBM, 2018, pp. 349–355.
  • [5] J. Du, Z. He, L. Wang, A. Gholipour, Z. Zhou, D. Chen, and Y. Jia, “Super-resolution reconstruction of single anisotropic 3D MR images using residual convolutional neural network,” Neurocomputing, 2019.
  • [6] J. Hatvani, A. Horváth, J. Michetti, A. Basarab, D. Kouamé, and M. Gyöngy, “Deep learning-based super-resolution applied to dental computed tomography,” IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 3, no. 2, pp. 120–128, 2018.
  • [7] J. Hatvani, A. Basarab, J.-Y. Tourneret, M. Gyöngy, and D. Kouamé, “A Tensor Factorization Method for 3-D Super Resolution With Application to Dental CT,” IEEE Transactions on Medical Imaging, vol. 38, no. 6, pp. 1524–1531, 2018.
  • [8] Y. Huang, L. Shao, and A. F. Frangi, “Simultaneous Super-Resolution and Cross-Modality Synthesis of 3D Medical Images Using Weakly-Supervised Joint Convolutional Sparse Coding,” in Proceedings of CVPR, 2017, pp. 5787–5796.
  • [9] J. Jurek, M. Kocinski, A. Materka, M. Elgalal, and A. Majos, “CNN-based super-resolution reconstruction of 3D MR images using thick-slice scans,” Biocybernetics and Biomedical Engineering, vol. 40, no. 1, pp. 111–125, 2019.
  • [10] Y. Li, B. Song, J. Guo, X. Du, and M. Guizani, “Super-Resolution of Brain MRI Images Using Overcomplete Dictionaries and Nonlocal Similarity,” IEEE Access, vol. 7, pp. 25 897–25 907, 2019.
  • [11] D. Mahapatra, B. Bozorgtabar, and R. Garnavi, “Image super-resolution using progressive generative adversarial networks for medical image analysis,” Computerized Medical Imaging and Graphics, vol. 71, pp. 30–39, 2019.
  • [12] O. Oktay, W. Bai, M. Lee, R. Guerrero, K. Kamnitsas, J. Caballero, A. de Marvao, S. Cook, D. O’Regan, and D. Rueckert, “Multi-input Cardiac Image Super-Resolution Using Convolutional Neural Networks,” in Proceedings of MICCAI, 2016, pp. 246–254.
  • [13] C.-H. Pham, C. Tor-Díez, H. Meunier, N. Bednarek, R. Fablet, N. Passat, and F. Rousseau, “Multiscale brain MRI super-resolution using deep 3D convolutional networks,” Computerized Medical Imaging and Graphics, vol. 77, p. 101647, 2019.
  • [14] J. Shi, Z. Li, S. Ying, C. Wang, Q. Liu, Q. Zhang, and P. Yan, “MR image super-resolution via wide residual networks with fixed skip connection,” IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 3, pp. 1129–1140, 2018.
  • [15] C. You, G. Li, Y. Zhang, X. Zhang, H. Shan, M. Li, S. Ju, Z. Zhao, Z. Zhang, W. Cong et al., “CT super-resolution GAN constrained by the identical, residual, and cycle learning ensemble (GAN-CIRCLE),” IEEE Transactions on Medical Imaging, 2019.
  • [16] H. Yu, D. Liu, H. Shi, H. Yu, Z. Wang, X. Wang, B. Cross, M. Bramler, and T. S. Huang, “Computed tomography super-resolution using convolutional neural networks,” in Proceedings of ICIP, 2017, pp. 3944–3948.
  • [17] K. Zeng, H. Zheng, C. Cai, Y. Yang, K. Zhang, and Z. Chen, “Simultaneous single- and multi-contrast super-resolution for brain MRI images based on a convolutional neural network,” Computers in Biology and Medicine, vol. 99, pp. 133–141, 2018.
  • [18] X. Zhao, Y. Zhang, T. Zhang, and X. Zou, “Channel splitting network for single MR image super-resolution,” IEEE Transactions on Image Processing, vol. 28, no. 11, pp. 5649–5662, 2019.
  • [19] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network,” in Proceedings of CVPR, 2016, pp. 1874–1883.
  • [20] M. Bevilacqua, A. Roumy, C. Guillemot, and M.-L. Alberi-Morel, “Low-Complexity Single-Image Super-Resolution based on Nonnegative Neighbor Embedding,” in Proceedings of BMVC, 2012, pp. 135.1–135.10.
  • [21] H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Proceedings of CVPR, vol. 1, 2004, pp. I–I.
  • [22] D. Dai, R. Timofte, and L. V. Gool, “Jointly Optimized Regressors for Image Super-resolution,” Computer Graphics Forum, vol. 34, pp. 95–104, 2015.
  • [23] J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image Super-Resolution Via Sparse Representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
  • [24] C. Dong, C. C. Loy, K. He, and X. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
  • [25] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image Super-Resolution Using Very Deep Residual Channel Attention Networks,” in Proceedings of ECCV, 2018, pp. 294–310.
  • [26] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proceedings of NIPS, 2014, pp. 2672–2680.
  • [27] J. C. Elliott and S. D. Dover, “X-ray microtomography,” Journal of Microscopy, vol. 126, no. 2, pp. 211–213, 1982.
  • [28] C. Enders, E.-M. Braig, K. Scherer, J. U. Werner, G. K. Lang, G. E. Lang, F. Pfeiffer, P. Noël, E. Rummeny, and J. Herzen, “Advanced Non-Destructive Ocular Visualization Methods by Improved X-Ray Imaging Techniques,” PLoS One, vol. 12, p. e0170633, 2017.
  • [29] G. Tromba, R. Longo, A. Abrami, F. Arfelli, A. Astolfo, P. Bregant, F. Brun, K. Casarin, V. Chenda, D. Dreossi, M. Hola, J. Kaiser, L. Mancini, R. H. Menk, E. Quai, E. Quaia, L. Rigon, T. Rokvic, N. Sodini, D. Sanabor, E. Schultke, M. Tonutti, A. Vascotto, F. Zanconati, M. Cova, and E. Castelli, “The SYRMEP Beamline of Elettra: Clinical Mammography and Bio‐medical Applications,” AIP Conference Proceedings, vol. 1266, no. 1, pp. 18–23, 2010.
  • [30] C. H. McCollough, S. Leng, L. Yu, and J. G. Fletcher, “Dual- and Multi-Energy CT: Principles, Technical Approaches, and Clinical Applications,” Radiology, vol. 276, no. 3, pp. 637–653, 2015.
  • [31] J. Doerner, M. Hauger, T. Hickethier, J. Byrtus, C. Wybranski, N. G. Hokamp, D. Maintz, and S. Haneder, “Image quality evaluation of dual-layer spectral detector CT of the chest and comparison with conventional CT imaging,” European Journal of Radiology, vol. 93, pp. 52–58, 2017.
  • [32] V. Nair and G. E. Hinton, “Rectified Linear Units Improve Restricted Boltzmann Machines,” in Proceedings of ICML, 2010, pp. 807–814.
  • [33] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proceedings of ICLR, 2015.
  • [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [35] Z. Wang and A. C. Bovik, “Mean squared error: Love it or leave it? A new look at signal fidelity measures,” Signal Processing Magazine, vol. 26, no. 1, pp. 98–117, 2009.
  • [36] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced Deep Residual Networks for Single Image Super-Resolution,” in Proceedings of CVPRW, 2017, pp. 1132–1140.
  • [37] C. Popa, N. Verga, M. Patachia, S. Banita, and C. Matei, “Advantages of laser photoacoustic spectroscopy in radiotherapy characterization,” Romanian Reports in Physics, vol. 66, pp. 120–126, 2014.
  • [38] D. Sardari and N. Verga, “Calculation of externally applied electric field intensity for disruption of cancer cell proliferation,” Electromagnetic Biology and Medicine, vol. 29, no. 1–12, pp. 26–30, 2010.