Wavelet Domain Residual Network (WavResNet) for Low-Dose X-ray CT Reconstruction

03/04/2017 · by Eunhee Kang, et al. · KAIST Department of Mathematical Sciences

Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT are computationally expensive because of the repeated application of forward and backward projections. Inspired by the success of deep learning in computer vision applications, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some textures were not fully recovered, which appeared unfamiliar to some radiologists. To cope with this problem, here we propose a direct residual learning approach in the directional wavelet domain that improves upon our previous work. In particular, the new network estimates the noise in each input wavelet subband, and the denoised wavelet coefficients are then obtained by subtracting the estimated noise from the input wavelet bands. The experimental results confirm that the proposed network achieves significantly improved performance, preserving the detailed texture of the original images.

I. Introduction

Due to the risk of radiation exposure, methods for minimizing the X-ray dose have been extensively studied. Reducing the number of emitted X-ray photons mitigates radiation exposure; however, it yields low signal-to-noise ratio (SNR) measurements that introduce noise into the reconstruction results. The noise in low-dose CT is usually approximated by a Gaussian model, but low-dose CT images also contain CT-specific streaking noise, which arises from photon starvation and beam hardening in the complicated nonlinear X-ray photon acquisition process. To cope with these problems, model based iterative reconstruction (MBIR) algorithms have been investigated [beister2012iterative, ramani2012splitting]. However, MBIR is limited by the computationally expensive iterative application of forward and backward projections.
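To make this computational burden concrete, the following minimal sketch shows a generic gradient-descent MBIR loop; the operators `A` and `At`, the regularizer gradient `grad_reg`, and the step sizes are hypothetical placeholders, not the specific algorithms of the cited works.

```python
import numpy as np

def mbir(y, A, At, grad_reg, alpha=1e-3, lam=0.1, n_iter=100):
    """Generic gradient-descent MBIR loop (illustrative sketch).

    y        : measured sinogram
    A, At    : forward and backward projection operators (callables)
    grad_reg : gradient of the regularization term (callable)
    """
    x = At(y)  # simple initialization from a back-projection
    for _ in range(n_iter):
        # Each iteration applies BOTH the forward projection A and the
        # backward projection At -- the source of MBIR's high cost.
        grad_data = At(A(x) - y)
        x = x - alpha * (grad_data + lam * grad_reg(x))
    return x
```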

Nowadays, extensive data is available, and it is desirable to exploit it. In the computer vision community, deep convolutional neural networks (CNNs) have been actively investigated using such large datasets and high-performance graphics processing units (GPUs) [krizhevsky2012imagenet]. With the development of new network units such as the rectified linear unit (ReLU), max pooling, and batch normalization, the classical training problems have been resolved, allowing networks to adopt deeper structures. Deep networks have achieved great performance improvements in low-level computer vision applications such as denoising [chen2015learning] and super-resolution [dong2014learning].

In the area of medical imaging, there are also extensive research activities using deep learning. However, most of these studies focus on image-based diagnostics, and applications to image reconstruction problems such as X-ray computed tomography (CT) reconstruction are relatively less studied. Recently, we introduced a wavelet domain deep learning algorithm for low-dose X-ray CT [kang2016deep], whose validity was rigorously confirmed by winning the second place award in the 2016 AAPM Low-Dose CT Grand Challenge. However, in this earlier work, the reconstruction results lost some of the texture of the original images. Therefore, one of the most important contributions of this paper is a substantially improved deep network that overcomes this limitation of the previous work by maintaining detailed textures and edges.

The key to this improvement is the observation that the low-dose noise artifacts in the wavelet domain have a much simpler topology than the original full-dose images, so that learning the artifact signal is easier than learning the full-dose images. Once the noise in the wavelet domain is estimated, the denoised wavelet coefficients are obtained by subtracting the estimated noise from the wavelet coefficients of the input low-dose X-ray CT images. The final image is then obtained by wavelet recomposition. Because the learning estimates the wavelet domain residual signal, we call the new deep learning algorithm the wavelet domain residual network (WavResNet).
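The following is a minimal sketch of this residual denoising pipeline, assuming PyWavelets for the decomposition and a trained `noise_estimator` network as a hypothetical stand-in for the WavResNet CNN; the standard separable 2D DWT stands in here for the directional wavelet transform used in the paper.

```python
import numpy as np
import pywt

def wavresnet_denoise(low_dose_img, noise_estimator, wavelet="db3", level=3):
    """Residual denoising in the wavelet domain (illustrative sketch).

    noise_estimator : trained network that maps a stack of wavelet
                      subbands to an estimate of the noise in each band.
    """
    # 1. Decompose the low-dose image into wavelet subbands.
    coeffs = pywt.wavedec2(low_dose_img, wavelet, level=level)

    # 2. Estimate the noise in each detail subband and subtract it
    #    (residual learning: the network predicts the artifact, not the image).
    denoised = [coeffs[0]]  # keep the approximation band as-is
    for (cH, cV, cD) in coeffs[1:]:
        bands = np.stack([cH, cV, cD])
        noise = noise_estimator(bands)   # predicted residual per subband
        cH, cV, cD = bands - noise       # subtract the estimated noise
        denoised.append((cH, cV, cD))

    # 3. Reconstruct the final image by wavelet recomposition.
    return pywt.waverec2(denoised, wavelet)
```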

Fig. 1: The proposed WavResNet architecture for low-dose X-ray CT reconstruction.

II. Theory

II-A Deep learning in a higher dimensional feature space

In a learning problem, based on an observation (input) $x \in \mathcal{X}$ and a label $y \in \mathcal{Y}$ generated by a distribution $P$, we are interested in estimating a regression function $f$ in a functional space $\mathcal{F}$ that minimizes the risk

$$R(f) = \mathbb{E}_{(x,y)\sim P}\left[\ell\big(f(x),\,y\big)\right],$$

where $\ell$ denotes a loss function. However, an important technical issue is that the associated probability distribution $P$ is unknown. Moreover, we only have a finite sequence of training data $\{(x_i, y_i)\}_{i=1}^{n}$, so only the empirical risk

$$\hat{R}_n(f) = \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i),\,y_i\big)$$

is available. A direct minimization of the empirical risk is, however, problematic because of overfitting.
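As a concrete illustration (not from the paper), a minimal sketch of evaluating the empirical risk under a squared loss, with hypothetical scalar toy data:

```python
import numpy as np

def empirical_risk(f, xs, ys):
    """Empirical risk R_hat_n(f) under the squared loss l(f(x), y) = (f(x) - y)^2."""
    return np.mean([(f(x) - y) ** 2 for x, y in zip(xs, ys)])

# Hypothetical toy data: the true distribution P (and hence the true risk)
# is unknown; training can only minimize this finite-sample surrogate.
rng = np.random.default_rng(0)
xs = rng.normal(size=100)
ys = 2.0 * xs + rng.normal(scale=0.1, size=100)

print(empirical_risk(lambda x: 2.0 * x, xs, ys))  # near the noise floor
print(empirical_risk(lambda x: 0.0, xs, ys))      # a poor predictor
```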

To solve this problem, statistical learning theory [anthony2009neural] has been developed to bound the risk of a learning algorithm in terms of a complexity measure (e.g., VC dimension, shatter coefficients, etc.) and the empirical risk. Rademacher complexity [bartlett2002rademacher] is one of the most modern notions of complexity, which is distribution-dependent and defined for any class of real-valued functions. Specifically, with probability at least $1-\delta$, for every function $f \in \mathcal{F}$,

$$R(f) \;\le\; \hat{R}_n(f) + 2\,\hat{\mathfrak{R}}_n(\mathcal{F}) + 3\sqrt{\frac{\ln(2/\delta)}{2n}}, \qquad (1)$$

where the empirical Rademacher complexity is defined as

$$\hat{\mathfrak{R}}_n(\mathcal{F}) = \mathbb{E}_{\sigma}\left[\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(x_i)\right],$$

where $\sigma_1, \ldots, \sigma_n$ are independent random variables that are uniformly chosen from $\{-1, +1\}$.
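As an illustration of this definition (not part of the paper), the empirical Rademacher complexity of a finite, hypothetical function class can be estimated by Monte Carlo sampling of the sign variables $\sigma_i$:

```python
import numpy as np

def empirical_rademacher(function_class, xs, n_draws=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity.

    function_class : iterable of callables f: x -> real value
    xs             : fixed sample x_1, ..., x_n
    """
    rng = np.random.default_rng(seed)
    n = len(xs)
    # Precompute f(x_i) for every f in the (finite) class: |F| x n matrix.
    F = np.array([[f(x) for x in xs] for f in function_class])
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # uniform Rademacher signs
        total += np.max(F @ sigma) / n           # sup over the class
    return total / n_draws

# Toy example: a small class of threshold functions on scalar inputs.
xs = np.linspace(-1, 1, 50)
fclass = [lambda x, t=t: float(x > t) for t in np.linspace(-1, 1, 21)]
print(empirical_rademacher(fclass, xs))
```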

Here, the empirical risk is determined by the representation power of the network [telgarsky2016benefits], while the complexity is determined by the structure of the network. The capacity of the function class grows exponentially with the number of hidden units [telgarsky2016benefits]. Once the network architecture is determined, its capacity is fixed. Therefore, the performance of the network depends on the complexity of the label that the given deep network tries to approximate.

One of the most important contributions of this paper is to extend this idea to a novel deep network design principle. Specifically, for a given deep network $\mathcal{Q}$, our design goal is to find maps $\mathcal{T}$ and $\mathcal{T}'$ that embed the input and label data sets to a high dimensional feature space, so that the resulting data sets $\mathcal{T}(\mathcal{X})$ and $\mathcal{T}'(\mathcal{Y})$ may have a simpler data manifold. This can be shown in the following diagram:
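A minimal LaTeX sketch of this mapping, with $\mathcal{T}$, $\mathcal{T}'$, and $\mathcal{Q}$ as the placeholder symbols above (for WavResNet, $\mathcal{T}$ is the wavelet decomposition and $(\mathcal{T}')^{-1}$ the recomposition):

```latex
% Embedding design principle: the regression f from X to Y is realized
% by lifting to feature space via T, applying the deep network Q there,
% and mapping back via the inverse of T'.
\[
\begin{array}{ccc}
\mathcal{X} & \stackrel{f}{\longrightarrow} & \mathcal{Y} \\[2pt]
{\scriptstyle \mathcal{T}}\big\downarrow & & \big\uparrow{\scriptstyle (\mathcal{T}')^{-1}} \\[2pt]
\mathcal{T}(\mathcal{X}) & \stackrel{\mathcal{Q}}{\longrightarrow} & \mathcal{T}'(\mathcal{Y})
\end{array}
\]
```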