Neural Proximal Gradient Descent for Compressive Imaging

06/01/2018 · Morteza Mardani et al., Stanford University

Recovering high-resolution images from limited sensory data typically leads to a severely ill-posed inverse problem, demanding inversion algorithms that effectively capture the prior information. Learning a good inverse mapping from training data faces severe challenges, including: (i) scarcity of training data; (ii) the need for plausible reconstructions that are physically feasible; (iii) the need for fast reconstruction, especially in real-time applications. We develop a system that addresses all of these challenges, using as its basic architecture the recurrent application of a proximal gradient algorithm. We learn a proximal map that works well with real images, based on residual networks. Contraction of the resulting map is analyzed, and incoherence conditions that drive the convergence of the iterates are investigated. Extensive experiments are carried out under different settings: (a) reconstructing abdominal MRI of pediatric patients from highly undersampled Fourier-space data and (b) superresolving natural face images. Our key findings include: 1. A recurrent ResNet with a single residual block unrolled from an iterative algorithm yields an effective proximal that accurately reveals MR image details. 2. Our architecture significantly outperforms conventional non-recurrent deep ResNets by 2dB SNR; it is also trained much more rapidly. 3. It outperforms state-of-the-art compressed-sensing Wavelet-based methods by 4dB SNR, with 100x speedups in reconstruction time.

1 Introduction

Linear inverse problems appear broadly in image restoration tasks, in applications ranging from natural image superresolution to biomedical image reconstruction. In such tasks, one oftentimes encounters a severely ill-posed recovery task, which necessitates regularization with proper statistical priors. This is however impeded by the following challenges: c1) real-time and interactive tasks demand a low overhead for inference; e.g., imagine MRI visualization for neurosurgery clearpoint , or interactive superresolution on cell phones romano2017raisr ; c2) the need for recovering plausible images that are consistent with the physical model, which is particularly important for medical diagnosis, where artifacts can be misleading; c3) limited labeled training data, especially for medical imaging.

Conventional compressed sensing (CS) relies on sparse coding of images in a proper transform domain via a universal ℓ1-regularization; see e.g., donoho2006compressed ; pualy_mri20017 ; duarte2009learning . To automate the time-intensive iterative soft-thresholding algorithm (ISTA) for sparse coding, gregor2010learning puts forth the learned ISTA (LISTA). Relying on soft-thresholding, it trains a simple (single dense layer) recurrent network that maps measurements to a surrogate of the sparse code. sprechmann2015learning advocates a wider class of functions derived from proximal operators. xin2016maximal also adopts LSTMs to learn the minimal sparse code, where the learned network was seen to improve the RIP of coherent dictionaries. Sparse recovery, however, is the common objective of xin2016maximal ; gregor2010learning , and the measurement model is not explicitly taken into account. Moreover, no guarantees were provided for the convergence and quality of the iterates.

Deep neural networks have recently proven quite powerful in modeling prior distributions for images gan-goodfellow2014 ; johnson2016 ; lsgan2017 ; leding_photorealistic_2016 ; inpainting-yeh-2016 ; lossfunction_zhao2017 . There is a handful of recent attempts to integrate the priors offered by generative nets into the inversion of linear inverse tasks, ranging from local image restoration such as superresolution johnson2016 ; leding_photorealistic_2016 and inpainting inpainting-yeh-2016 , to more global tasks such as biomedical image reconstruction Majumdar_autoencoder_15 ; sun2016deep ; lowdose_ct2017 ; mardani2017deep ; automap_ismrm_2017 ; partial_fourier_cnn_ismrm_2017 ; cascade_cnn_mrreocn_ismrm_2017 ; cs_residual_learning_ismrm_2017 . One can divide them into two main categories. The first category comprises post-processing methods that train a deep network to map a poor (linear) estimate of the image to the true one Majumdar_autoencoder_15 ; lowdose_ct2017 ; automap_ismrm_2017 ; cs_residual_learning_ismrm_2017 ; johnson2016 ; leding_photorealistic_2016 ; mardani2017deep . Residual networks (ResNets) are a suitable choice for training such deep nets due to their stable training behavior resnet2016 , along with pixel-wise and perceptual costs induced, e.g., by generative adversarial networks (GANs) gan-goodfellow2014 ; mardani2017deep . The post-processing schemes offer a clear gain in computation time, but they provide no guarantee of data fidelity, and their accuracy is only comparable with CS-based iterative methods. The second category is inspired by unrolling the iterations of classical optimization algorithms, and learns the filters and nonlinearities by training deep CNNs diamond2017unrolled ; sun2016deep ; metzler2017learned ; adler2017learned . These improve the accuracy relative to CS, but deep denoising CNNs that change over the iterations incur a large training overhead. Note also that for a signal that has a low-dimensional code under a deep pre-trained generative model, Bora_dimakis_2017 ; hand2017global establish reconstruction guarantees; the inference, however, relies on an iterative procedure based on empirical risk minimization that is quite time-intensive for real-time applications.

Contributions. Aiming for rapid, feasible, and plausible image recovery in ill-posed linear inverse tasks, this paper puts forth a novel neural proximal gradient descent algorithm that learns the proximal map using a recurrent ResNet. Local convergence of the iterates is studied for the inference phase, assuming that the true image is (close to) a fixed point of the proximal, i.e., it lies on the manifold represented by the proximal. In particular, contraction of the learned proximal is empirically analyzed to ensure that the RNN iterates converge to the true solution. Extensive evaluations are carried out for the global task of MRI reconstruction and the local task of natural image superresolution. We find:

  • For MRI reconstruction, it works better to repeat a small ResNet (with a single RB) several times than to build a general deep network.

  • Our recurrent ResNet architecture outperforms general deep network schemes by about 2dB SNR, with much less training data needed. It is also trained much more rapidly.

  • Our architecture outperforms existing state-of-the-art CS-WV schemes by a 4dB gain in SNR, while achieving a 100x reduction in reconstruction time.

These findings rest on several novel project contributions:

  • Successful design and construction of a neural proximal gradient descent scheme based on recurrent ResNets.

  • Rigorous experimental evaluations, both for undersampled pediatric MRI data, and for superresolving natural face images, comparing our proposed architecture with conventional non-recurrent deep ResNets and with CS-WV.

  • Formal analysis of the map contraction for the proximal gradient algorithm with accompanying empirical measurements.

2 Preliminaries and problem statement

Consider an ill-posed linear system y = Φx + v, where Φ is an m × n measurement matrix with m ≪ n, and v captures the noise and unmodeled dynamics. Suppose the unknown (complex-valued) image x lies in a low-dimensional manifold. No information is known about the manifold besides the training samples X := {x_i} drawn from it, and the corresponding (possibly) noisy observations Y := {y_i}. Given a new undersampled observation y, the goal is to quickly recover a plausible image x̂.

The stated problem covers a wide range of image restoration tasks. For instance, in medical image reconstruction, Φ describes a projection driven by the physics of the acquisition system (e.g., the Fourier transform for an MRI scanner, and the Radon transform for a CT scanner). For image superresolution, it is the downsampling operator that averages out nonoverlapping image regions to arrive at a low-resolution image. Given an image prior distribution, one typically forms a maximum a posteriori (MAP) estimator formulated as the regularized least-squares (LS) program

(P1)    min_x  (1/2) ||y − Φx||_2^2 + ψ(x; Θ)

with the regularizer ψ(·; Θ), parameterized by Θ, that incorporates the image prior.

In order to solve (P1), one can adopt a variation of the proximal gradient algorithm parikh2014proximal with a proximal operator P_ψ that depends on the regularizer ψ parikh2014proximal . Starting from an initial guess x_0, and adopting a small step size α, the overall iterative procedure is expressed as

x_{t+1} = P_ψ( x_t + α Φ^H (y − Φ x_t) )    (2)

where Φ^H denotes the Hermitian transpose of Φ. For a convex function ψ, the proximal map is monotone, and the fixed point of (2) coincides with the global optimum of (P1) parikh2014proximal . For some simple prior distributions, the proximal operation is tractable in closed form. One popular example of such a proximal pertains to ℓ1-norm regularization for sparse coding, where the proximal operator gives rise to soft-thresholding and shrinkage in a certain domain such as Wavelet or Fourier. The associated iterations have been labeled ISTA; the related FISTA iterations offer accelerated convergence beck2009fast .
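To make (2) concrete, here is a minimal NumPy sketch of ISTA-style proximal gradient descent under an ℓ1 regularizer; the step-size rule, threshold, and synthetic Gaussian Φ are illustrative choices on our part, not the learned proximal developed in this paper.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal map of the l1 norm: shrink each entry toward zero by tau.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def proximal_gradient(y, Phi, tau=0.05, alpha=None, num_iters=100):
    """ISTA-style iterations x_{t+1} = prox(x_t + alpha * Phi^H (y - Phi x_t))."""
    m, n = Phi.shape
    if alpha is None:
        # A step size of 1 / ||Phi||^2 keeps the gradient step non-expansive.
        alpha = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(n)
    for _ in range(num_iters):
        grad_step = x + alpha * Phi.T @ (y - Phi @ x)   # data-consistency step
        x = soft_threshold(grad_step, alpha * tau)      # proximal (denoising) step
    return x

# Toy example: recover a sparse vector from compressed Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 10
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true
x_hat = proximal_gradient(y, Phi)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```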

3 Neural proximal learning

Motivated by the proximal gradient iterations in (2), and aiming to design efficient network architectures that automatically invert linear inverse tasks, the following questions need to be addressed first:

Q1. How to ensure rapid inference with affordable training for real-time image recovery?

Q2. How to ensure plausible reconstructions that are physically feasible?

3.1 Deep recurrent network architecture

The recursion in (2) can be envisioned as a feedback loop which, at the t-th iteration, takes an image estimate x_t, moves it towards the affine subspace of data-consistent images, and then applies the proximal operator to obtain x_{t+1}. The iterations adhere to the state-space model

s_{t+1} = x_t + α Φ^H (y − Φ x_t)    (3)
x_{t+1} = P(s_{t+1})                 (4)

where (3) is the gradient descent step that encourages data consistency. The initial input is y, with initial state x_0 = Φ^H y that is a linear (low-quality) image estimate. The state variable s_{t+1} is essentially the output of a linear network with the learnable step size α that linearly combines the linear image estimate Φ^H y with the output of the previous iteration, namely s_{t+1} = (I − α Φ^H Φ) x_t + α Φ^H y.

In order to model the proximal mapping we use a homogeneous recurrent neural network, depicted in Fig. 1. In essence, a truncated RNN with T iterations is used for training. The measurement y forms the input for all iterations and, together with the output of the previous iteration, forms the state variable for the current iteration. The proximal operator is modeled via a possibly deep neural network, as elaborated in the next section. As argued earlier, the proximal resembles a projection onto the manifold of visually plausible images. Thus, one can interpret P as a denoiser that gradually removes the aliasing artifacts from the input image.
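The unrolled recursion (3)-(4) can be sketched as a short loop that alternates the data-consistency gradient step with a proximal step; the helper below is a hypothetical NumPy illustration in which the learned proximal is abstracted as a callable (in the actual architecture it is the trained ResNet, and the loop is unrolled inside the training graph).

```python
import numpy as np

def unrolled_recon(y, Phi, prox, alpha=None, num_iters=10):
    """Neural proximal gradient descent, eqs. (3)-(4):
       s_{t+1} = x_t + alpha * Phi^H (y - Phi x_t);  x_{t+1} = prox(s_{t+1})."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative step size
    x = Phi.conj().T @ y                            # linear (zero-filled style) estimate
    for _ in range(num_iters):
        s = x + alpha * Phi.conj().T @ (y - Phi @ x)   # state update, eq. (3)
        x = prox(s)                                    # learned proximal, eq. (4)
    return x

# With an identity "proximal", this reduces to plain Landweber/gradient iterations.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = rng.standard_normal(100)
y = Phi @ x_true
x_hat = unrolled_recon(y, Phi, prox=lambda s: s)
```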

Figure 1: Truncated RNN architecture for neural proximal learning with T iterations. The proximal P is modeled with a multi-layer NN.

3.2 Proximal modeling

We consider an L-layer neural network with an element-wise activation function written in masked form, σ(z) = Λ(z) ⊙ z, where Λ denotes the mask function. We study several examples of the mask function Λ, including the step function for ReLU and the sigmoid function for Swish ramachandran2018searching . The l-th layer maps z_{l−1} to z_l through

z_l = Λ(W_l z_{l−1}) ⊙ (W_l z_{l−1}),   l = 1, …, L,

where the bias term is included in the weight matrix W_l. At the t-th iteration, the network starts with the input z_0 = s_{t+1} and outputs x_{t+1} = z_L. Typically, the linear weights W_l are modeled by a convolution operation with a certain kernel and stride size. The network weights collected in Θ := {W_l}_{l=1}^{L} then parameterize the proximal. To avoid the vanishing gradients associated with training RNNs, we can use ResNets resnet2016 or highway nets greff2016highway . An alternate path to our model goes via DiracNets zagoruyko2017diracnets , which parameterize each layer's weights as the identity plus a learned term and are shown to exhibit behavior similar to ResNets.
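As a toy illustration of this masked-activation view, the NumPy sketch below applies the layer map z_l = Λ(W_l z_{l−1}) ⊙ (W_l z_{l−1}) with either a step (ReLU) or sigmoid (Swish) mask; the dense random weights stand in for the convolutional weights and are purely illustrative.

```python
import numpy as np

def mask(z, kind="relu"):
    # Mask function Lambda(z): step function for ReLU, sigmoid for Swish.
    if kind == "relu":
        return (z > 0).astype(z.dtype)
    if kind == "swish":
        return 1.0 / (1.0 + np.exp(-z))
    raise ValueError(kind)

def proximal_layers(z, weights, kind="relu"):
    """Apply the L-layer map z_l = Lambda(W_l z_{l-1}) * (W_l z_{l-1})."""
    for W in weights:
        pre = W @ z
        z = mask(pre, kind) * pre   # element-wise masked activation
    return z

# Toy usage with random dense weights standing in for convolutions.
rng = np.random.default_rng(2)
weights = [rng.standard_normal((64, 64)) * 0.1 for _ in range(3)]
z0 = rng.standard_normal(64)
out = proximal_layers(z0, weights, kind="swish")
```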

3.3 Neural proximal training

In order to learn the proximal map, the recurrent neural network in Fig. 1 is trained end-to-end using the population of training data X := {x_i} and Y := {y_i}. For the measurement y_i, the RNN with T iterations recovers x̂_i := x_{i,T}, where the composite map is parameterized by the network training weights Θ and the step size α. Let X̂ denote the population of recovered images. In general, one can use population-wise costs such as GANs gan-goodfellow2014 ; mardani2017deep , or element-wise costs such as pixel-wise ℓ1/ℓ2 losses, to penalize the difference between X and X̂. To ease the exposition, we adopt the element-wise empirical risk minimization

(P2)    min_{Θ, α}  Σ_i ℓ(x_i, x_{i,T}) + λ Σ_i Σ_{t=1}^{T} ||y_i − Φ x_{i,t}||_2^2

for some λ ≥ 0, where a typical choice for the loss ℓ is MSE, i.e., ℓ(x, x̂) = ||x − x̂||_2^2. The second term encourages the outputs of different iterations to be consistent with the measurements. It is found to significantly improve the training convergence of the RNN for large iteration counts T. Note that one can additionally augment (P2) with an adversarial GAN loss, as in our companion work mardani2017deep , which further favors the perceptual image quality that is critical in medical imaging.
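A minimal NumPy sketch of this composite objective for a single training sample is given below (hypothetical shapes and weighting; in practice the loss is evaluated inside the training graph and minimized with Adam):

```python
import numpy as np

def training_loss(x_true, x_iters, y, Phi, lam=1e-3):
    """MSE on the final iterate plus a data-consistency penalty on every iterate,
       loosely mirroring the empirical risk in (P2)."""
    mse = np.sum(np.abs(x_iters[-1] - x_true) ** 2)
    data_consistency = sum(np.sum(np.abs(y - Phi @ x_t) ** 2) for x_t in x_iters)
    return mse + lam * data_consistency
```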

4 Contraction Analysis

Consider the trained RNN in Fig. 1. In the inference phase, with a new measurement y, we are motivated to study whether the iterates in (3)-(4) converge, their speed of convergence, and whether upon convergence they coincide with the true unknown image x*. To make the analysis tractable, the following assumption is made:

(A1) The measurements are noiseless, namely y = Φ x*, and the true image is close to a fixed point of the proximal operator, namely ||P(x*) − x*|| ≤ ε for some small ε ≥ 0.

The fixed-point assumption seems to be a stringent requirement, but it is typically made in this context to make the analysis tractable; see e.g., Bora_dimakis_2017 . It roughly means that the images lie on a manifold (we use the term manifold here purely in an informal sense) represented by the map P. Assuming that the train and test data lie on the same manifold, one can enforce it during training by adding a penalty term to (P2).
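One plausible form of such a penalty (our illustration; the text above does not spell it out) asks the learned proximal to leave the training images approximately unchanged,

\[
\lambda' \sum_{i} \big\| \mathcal{P}_{\Theta}(x_i) - x_i \big\|_2^2 ,
\]

added to the training cost (P2), with λ' trading off the fixed-point requirement against the reconstruction loss.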

The mask at the t-th iteration and l-th layer can then be decomposed as

Λ_{t,l} = Λ*_l + dΛ_{t,l}    (5)

where Λ*_l is the true mask (the one induced when the network is fed the true image x*), and dΛ_{t,l} models the perturbation. Passing the input image s_{t+1} into the L-layer neural network then yields the output

x_{t+1} = P(s_{t+1}) = ( Π_{l=L}^{1} Λ_{t,l} W_l ) s_{t+1}    (6)

since, once the masks are fixed, the network acts linearly on its input. One can further write (6) as

x_{t+1} = ( Π_{l=L}^{1} Λ*_l W_l ) s_{t+1} + E_t s_{t+1}    (7)

Let us define the residual operator

E_t := Π_{l=L}^{1} (Λ*_l + dΛ_{t,l}) W_l − Π_{l=L}^{1} Λ*_l W_l    (8)

It can then be expressed as E_t = Σ_{S ⊆ {1,…,L}, S ≠ ∅} A_{t,S} with

A_{t,S} := Π_{l=L}^{1} M_{t,S,l} W_l,  where M_{t,S,l} = dΛ_{t,l} if l ∈ S and M_{t,S,l} = Λ*_l otherwise.    (9)

The term A_{t,S} captures the mask perturbation in every S-subset of the layers.

Rearranging the terms in (6)-(7), and using the assumption (A1), namely P*(x*) = x* + e for some representation error e such that ||e|| ≤ ε (with P* := Π_{l=L}^{1} Λ*_l W_l), together with the noiseless model y = Φ x*, we arrive at

x_{t+1} − x* = P* (I − α Φ^H Φ)(x_t − x*) + E_t s_{t+1} + e.    (10)

To study the contraction property, and thus the local convergence of the iterates to the true solution x*, let us first suppose that the perturbation at the t-th iteration, d_t := x_t − x*, belongs to a set D_t. We then introduce the contraction parameter associated with the true-mask map P* as

λ_t := sup_{d ∈ D_t}  ||P* (I − α Φ^H Φ) d|| / ||d||.    (11)

Similarly, for the perturbation map E_t define the contraction parameter

δ_t := sup_{d ∈ D_t}  ||E_t ( x* + (I − α Φ^H Φ) d )|| / ||d||.    (12)

Applying the triangle inequality to (10), one then simply arrives at

||x_{t+1} − x*|| ≤ ||P* (I − α Φ^H Φ)(x_t − x*)|| + ||E_t s_{t+1}|| + ||e||    (13)
              ≤ (λ_t + δ_t) ||x_t − x*|| + ε.    (14)

According to (14), for small ε a sufficient condition for (asymptotic) linear convergence of the iterates to the true x* is that λ_t + δ_t < 1. For a non-negligible representation error ε, if one wants the iterates to converge to within a ν-ball of x*, i.e., ||x_t − x*|| ≤ ν for large t, a sufficient condition is that λ_t + δ_t ≤ 1 − ε/ν.
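Unrolling (14) makes the ν-ball statement explicit; assuming a uniform bound λ_t + δ_t ≤ ρ < 1 for all t, a standard geometric-series argument gives

\[
\|x_{T}-x^{\ast}\| \;\le\; \rho^{T}\,\|x_{0}-x^{\ast}\| \;+\; \varepsilon\sum_{t=0}^{T-1}\rho^{t}
\;\le\; \rho^{T}\,\|x_{0}-x^{\ast}\| \;+\; \frac{\varepsilon}{1-\rho},
\]

so the error contracts geometrically down to a floor of ε/(1−ρ); requiring this floor to be at most ν recovers the condition ρ ≤ 1 − ε/ν.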

Motivated by real-time applications, e.g., MRI visualization in neurosurgery, it is of high interest to use the minimum iteration count T for which the algorithm reaches a close neighborhood of x*. Our conjecture is that, for a reasonably expressive neural proximal network, the perturbation masks dΛ_{t,l} become highly sparse over the iterations, so that δ_t ≤ δ for some small δ. Further analysis of this phenomenon, and establishing guarantees under simple and interpretable conditions in terms of the network parameters, is an important next step. This is the subject of our ongoing research, and will be reported elsewhere. Nonetheless, the next section provides empirical observations about the contraction parameters, where in particular λ_t is observed to be an order of magnitude larger than δ_t.

Remark 1 [De-biasing]. In sparse linear regression, LASSO is used to obtain a sparse solution that is possibly biased, while the support is accurate. The solution can then be de-biased by solving an LS program restricted to the LASSO support. In a similar manner, neural proximal gradient descent may introduce a bias due, e.g., to the representation error e. To reduce the bias, after the convergence of the iterates to x̂, one can fix the masks at all layers, replace the proximal map with the resulting linear map Π_{l=L}^{1} Λ̂_l W_l, and then find another fixed point of the iterates (3)-(4) with this linear map in place of P.
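Under the notation above, one way to read this de-biasing step (our paraphrase; the remark does not write it out as an equation) is as the linear fixed-point condition

\[
\hat{x}_{\mathrm{db}} \;=\; \Big(\prod_{l=L}^{1}\hat{\Lambda}_{l}W_{l}\Big)\Big(\hat{x}_{\mathrm{db}}+\alpha\,\Phi^{H}\big(y-\Phi\hat{x}_{\mathrm{db}}\big)\Big),
\]

which, once the masks Λ̂_l are frozen, is a linear system in x̂_db and could be solved, e.g., by conjugate gradient.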

5 Experiments

Performance of our novel neural proximal gradient descent scheme was assessed in two tasks: reconstructing pediatric MR images from undersampled k-space data, and superresolving natural face images. In the first task, undersampling k-space introduces aliasing artifacts that globally impact the entire image, while in the second task the blurring is local. While our focus is mostly on MRI, experiments with the image superresolution task are included to shed some light on the contraction analysis of the previous section. In particular, we aim to address the following questions:

Q1. What is the performance compared with the conventional deep architectures and with CS-MRI?

Q2. What is the proper depth for the proximal network, and number of iterations (T) for training?

Q3. Can one empirically verify the deep contraction conditions for the convergence of the iterates?

5.1 ResNets for proximal training

To address the above questions, we adopted a ResNet with a variable number of residual blocks (RBs). Each RB consisted of two convolutional layers with fixed kernel size and number of feature maps, each followed by batch normalization (BN) and ReLU activation. These were followed by three simple convolutional layers, where the first two undergo ReLU activation.
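For concreteness, the following TensorFlow/Keras sketch builds a proximal of this shape; the filter counts and kernel sizes are placeholder values of our choosing (the exact values used in the paper are not reproduced here), and the released implementation github-npgd-2018 remains the reference.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=128, kernel_size=3):
    # Two conv layers with batch norm and ReLU, plus a skip connection.
    h = layers.Conv2D(filters, kernel_size, padding="same")(x)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU()(h)
    h = layers.Conv2D(filters, kernel_size, padding="same")(h)
    h = layers.BatchNormalization()(h)
    return layers.Add()([x, h])

def build_proximal(num_blocks=1, filters=128, out_channels=2):
    # Input: complex image represented as two channels (real, imaginary).
    inp = layers.Input(shape=(None, None, out_channels))
    h = layers.Conv2D(filters, 3, padding="same", activation="relu")(inp)
    for _ in range(num_blocks):
        h = residual_block(h, filters)
    # Trailing conv layers map back to the two output channels.
    h = layers.Conv2D(filters, 3, padding="same", activation="relu")(h)
    out = layers.Conv2D(out_channels, 3, padding="same")(h)
    return tf.keras.Model(inp, out)
```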

We used the Adam optimizer with a tuned momentum parameter and mini-batch size, and an initial learning rate that was halved every few thousand iterations. Training was performed with the TensorFlow interface on an NVIDIA Titan X Pascal GPU with 12GB RAM. The source code for the TensorFlow implementation is publicly available on the GitHub page github-npgd-2018 .

5.2 MRI reconstruction and artifact suppression

Performance of our novel recurrent scheme was assessed in removing k-space undersampling artifacts from MR images. In essence, the MR scanner acquires Fourier coefficients (k-space data) of the underlying image across various coils. We focused on a single-coil MR acquisition model, where for the i-th patient the acquired k-space data admits

y_i = F_Ω(x_i) + v_i    (15)

Here, F refers to the 2D Fourier transform, the set Ω indexes the sampled Fourier coefficients retained by F_Ω, and v_i captures the acquisition noise. Just as in conventional CS-MRI, we selected Ω based on variable-density sampling with radial view ordering, which is more likely to pick low-frequency components from the center of k-space pualy_mri20017 . Only a small fraction of the Fourier coefficients were collected.
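A minimal NumPy sketch of this single-coil forward model is given below; the random variable-density-like mask is a simple stand-in of our own, not the radial view ordering used in the paper.

```python
import numpy as np

def undersampled_fourier(x, mask):
    """Single-coil forward model: 2D FFT followed by k-space undersampling."""
    k_full = np.fft.fft2(x, norm="ortho")
    return mask * k_full                      # zero out unsampled coefficients

def zero_filled_recon(y_masked):
    """Linear (zero-filled) estimate: inverse FFT of the undersampled k-space."""
    return np.fft.ifft2(y_masked, norm="ortho")

# Toy variable-density-like mask: keep low frequencies, subsample high ones.
rng = np.random.default_rng(3)
n = 128
fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
radius = np.sqrt(fx**2 + fy**2)
prob = np.clip(0.03 / (radius + 1e-3), 0.0, 1.0)   # denser sampling near the center
mask = rng.random((n, n)) < prob

x = rng.standard_normal((n, n))          # stand-in for an MR image slice
y = undersampled_fourier(x, mask)
x_zf = zero_filled_recon(y)              # aliased (zero-filled) input to the network
```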

Dataset. T1-weighted abdominal image volumes were acquired from pediatric patients. Each 3D volume includes a stack of axial slices of fixed size. All in-vivo scans were acquired on a 3T MRI scanner (GE MR750). The input and output were complex-valued images of the same size, each with two channels for the real and imaginary components. The input image was generated using an inverse 2D FT of the k-space data, with the missing data filled with zeros (ZF); it is severely contaminated with aliasing artifacts.

5.2.1 Performance for various number/size of iterations

In order to assess the impact of the network architecture on image recovery performance, the RNN was trained for a variable number of iterations (T) and a variable number of residual blocks (RBs). Slices from a subset of the training patients were randomly picked for training, and slices from separate test patients were used for evaluation. For training the RNN, we use the cost in (P2) with a fixed weight λ on the data-consistency term.

Fig. 2 depicts the SNR and structural similarity index metric (SSIM) wang2004image versus the number of iterations (copies) for proximal networks with different numbers of RBs. It is observed that increasing the number of iterations significantly improves the SNR and SSIM, but leads to longer inference and training times. In particular, using three iterations instead of one achieves a clear SNR gain, both for a single-RB proximal and for larger ones. Interestingly, when using a single iteration, adding more RBs to make a deeper network does not yield further improvements beyond a point. Notice also that a single RB tends to be reasonably expressive for modeling the MR image denoising proximal, although repeating it several times eventually saturates the SNR. A proximal with a few RBs, however, turns out to be more expressive, and performs as well as one with many more RBs. Similar observations hold for SSIM.

Figure 2: Average SNR and SSIM versus the number of copies (iterations). Note: the single-copy ResNet refers to the deep ResNet that is an existing alternative to our proposed RNN.

Training and inference time. Inference time is proportional to the number of unrolled iterations. Passing each image through one unrolled iteration with one RB takes on the order of milliseconds when fully using the GPU. It is hard to precisely evaluate the training and inference time under fair conditions, as it strongly depends on the implementation and the allocated memory and processing power per run. The estimated inference times listed in Table 1 are averaged over a few runs on the GPU. We observed empirically that with shared weights (e.g., several iterations with a single RB), the training converges within hours; in contrast, training a deep ResNet with many RBs takes considerably longer to converge.

Table 1: Performance trade-off for various RNN architectures. Columns: iterations, RBs, train time (hours), inference time (sec), SNR (dB), and SSIM. Rows compare the recurrent variants against a deep ResNet, CS-TV, and CS-WV (train and inference times are n/a for the CS baselines).
Figure 3: A representative axial abdominal slice for a test patient reconstructed by zero-filling (1st column); CS-WV (2nd column); a deep ResNet (3rd column); neural proximal gradient descent with three different iteration/RB configurations (4th-6th columns); and the gold standard (7th column).

5.2.2 Comparison with sparse coding

To compare with conventional CS-MRI, CS-WV was tuned for best SNR performance using BART bart2016 , which runs FISTA iterations along with conjugate gradient descent steps until convergence. Quantitative results are listed in Table 1, where it is evident that the recurrent scheme with shared weights significantly outperforms CS, with an SNR gain of about 4dB that leads to sharper images with finer texture details, as seen in Fig. 3. As a representative example, Fig. 3 depicts the reconstructed abdominal slice of a test patient. CS-WV retrieves a blurry image that misses the sharp details of the liver vessels. A deep ResNet (a single iteration with many RBs) captures a cleaner image, but still blurs out fine texture details such as vessels. However, when using several unrolled iterations with a single RB for proximal modeling, more details of the liver vessels are visible, and the texture appears more realistic. Similarly, using more iterations with a small number of RBs retrieves finer details than a single iteration with a relatively large proximal network.

In summary, we make three key findings:

F1. The proximal for denoising MR images can be well represented by training a ResNet with a small number of RBs.

F2. Multiple back-and-forth iterations are needed to recover a plausible MR image that is physically feasible.

F3. Considering the training and inference overhead and the quality of the reconstructed images, an RNN with a moderate number of iterations and a small-RB proximal is a promising candidate for implementation in clinical scanners.

5.3 Verification of the contraction conditions

To verify the contraction analysis developed in Section 4, we focus on the image superresolution (SR) task. In this linear inverse task, one only has access to a low-resolution (LR) image obtained by downsampling with an averaging convolution kernel: to form the LR image, the pixels in non-overlapping regions are averaged out. SR is a challenging ill-posed problem, and has been the subject of intensive research; see e.g., bruna2015super ; Sonderby_2014 ; johnson2016 ; romano2017raisr . Our goal is not to achieve state-of-the-art performance, but only to study the behavior of proximal learning.
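For reference, here is a minimal NumPy sketch of this block-averaging downsampling operator; the factor and image size below are illustrative, not the exact settings of the experiments.

```python
import numpy as np

def box_downsample(img, factor=4):
    """Average non-overlapping factor x factor blocks to form the LR image."""
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor           # crop to a multiple of factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor, -1)
    return blocks.mean(axis=(1, 3)).squeeze()

# Toy usage: a 128x128 array reduced to 32x32 by 4x4 block averaging.
rng = np.random.default_rng(4)
hr = rng.random((128, 128))
lr = box_downsample(hr, factor=4)
```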

CelebA dataset. Adopting the CelebFaces Attributes Dataset (CelebA) liu2015faceattributes , we use separate subsets of images for training and test. Ground-truth images are downsampled to form the LR inputs.

The proximal net is modeled as a multi-layer CNN with the Swish nonlinearity ramachandran2018searching in the last layer. The hidden layers undergo no nonlinearity, and several kernel sizes are tried. The proximal then admits the layered form in (6). The RNN is trained with a fixed number of unrolled iterations, and the normalized RMSE, i.e., ||x_t − x*|| / ||x*||, is plotted versus the iteration index t in Fig. 5 (top) for various kernel sizes. It decreases quickly and converges after a few iterations, which suggests that the converged solution is possibly a fixed point of the proximal map. For a representative face image, the outputs of different iterations as well as the ground truth are shown in Fig. 4. The resolution visibly improves over the iterations.

Figure 4: Superresolved face images at different iterations compared with the ground truth. The proximal is a three-layer CNN.

The contraction parameters are also plotted in Fig. 5. The space of perturbations used for the operator norms is limited to the admissible ones that inherit the structure of the iterations: for each test sample, we inspect the per-iteration ratios defining λ_t and δ_t along the observed error directions x_t − x*. The corresponding error bars are plotted in Fig. 5 (bottom). It is apparent that λ_t and δ_t quickly decay across iterations, indicating that later iterations produce perturbations that are more incoherent to the proximal map. Also, we can see that λ_t converges to a level that represents the bias generated by the iterates, similar to the bias introduced in LASSO. In addition, one can observe that λ_t is the dominant term, usually an order of magnitude larger than δ_t.

Figure 5: Top: normalized RMSE evolution over the iterations for the image superresolution task with different kernel sizes. Bottom: error bars for λ_t and δ_t per iteration for image superresolution, where the proximal is a multi-layer CNN.

6 Conclusions

This paper develops a novel neural proximal gradient descent scheme for the recovery of images from highly compressed measurements. Unrolling the proximal gradient iterations, a recurrent architecture is proposed that models the proximal map via ResNets. For the trained network, contraction of the proximal map, and subsequently the local convergence of the iterates, is studied and empirically evaluated. Extensive experiments are performed to assess various network wirings, and to verify the contraction conditions, in reconstructing MR images of pediatric patients and superresolving natural images. Our findings for MRI indicate that a small ResNet can effectively model the proximal, and significantly improve upon recent deep architectures as well as conventional CS-MRI in terms of both reconstruction quality and complexity.

While this paper sheds some light on the local convergence of neural proximal gradient descent, our ongoing research focuses on a more rigorous analysis to derive simple and interpretable contraction conditions. The main challenge pertains to understanding the distribution of activation masks, which calls for extensive empirical evaluation. Stable training of the RNN for large iteration counts using gated recurrent networks is another important focus of our current research.

7 Acknowledgements

We would like to acknowledge Dr. Marcus Alley from the Radiology Department at Stanford University for setting up the infrastructure to automatically collect the MRI dataset used in this paper. We would also like to acknowledge Dr. Enhao Gong, and Dr. Joseph Cheng for fruitful discussions and their feedback about the MRI reconstruction and software implementation.

References