I Introduction
We consider a complex linear regression problem, where a complex data vector $\mathbf{y} \in \mathbb{C}^M$ is formed of noisy linear measurements of a signal of interest $\mathbf{x}_0 \in \mathbb{C}^N$:

(1) $\mathbf{y} = \mathbf{A}\mathbf{x}_0 + \boldsymbol{\varepsilon}$

where $\mathbf{A} \in \mathbb{C}^{M \times N}$ and $\boldsymbol{\varepsilon} \sim \mathcal{CN}(\mathbf{0}, \sigma^2 \mathbf{I}_M)$, where $\mathbf{I}_M$ is the $M \times M$ identity matrix. Here, $\mathcal{CN}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ denotes the complex normal distribution with mean $\boldsymbol{\mu}$, covariance $\boldsymbol{\Sigma}$ and white phase. A well-studied approach is to seek a solution of

(2) $\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left\{ \tfrac{1}{2}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + f(\mathbf{x}) \right\}$

where $f(\mathbf{x})$ is a model-based penalty function. Compressed sensing [8, 7] concerns the reconstruction of $\mathbf{x}_0$ from underdetermined measurements $M < N$. Commonly, sparsity in $\mathbf{x}_0$ is promoted by solving (2) with $f(\mathbf{x}) = \lambda\|\mathbf{\Psi}\mathbf{x}\|_1$ for sparse weighting $\lambda$ and sparsifying transform $\mathbf{\Psi}$.
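The $\ell_1$-regularised problem (2) can be illustrated with a minimal proximal-gradient (ISTA) sketch, assuming for simplicity that the sparsifying transform $\mathbf{\Psi}$ is the identity; the function names are illustrative, not from the source:

```python
import numpy as np

def soft_threshold(z, t):
    """Complex soft thresholding: shrink magnitudes by t, preserve phase."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0)

def ista(y, A, lam, n_iters=200):
    """Minimise 0.5*||y - A x||^2 + lam*||x||_1 by proximal gradient
    descent (ISTA), taking Psi = I so the prox is soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iters):
        # gradient step on the data-fidelity term, then the prox of lam*||.||_1
        x = soft_threshold(x + step * A.conj().T @ (y - A @ x), step * lam)
    return x
```

This is only a sketch of the generic approach to (2); the letter's own algorithms (AMP, VDAMP) replace the plain gradient step with corrected message-passing updates.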
The Approximate Message Passing (AMP) algorithm [9] is an iterative method that estimates $\mathbf{x}_0$ in linear regression problems such as (1). At iteration $k$, AMP implements a denoiser $g(\cdot; \tau_k)$ on an estimate $\mathbf{r}_k$ with mean-squared error estimate $\tau_k$. For instance, for problems of the form of (2), $g$ is the proximal operator associated with $f$:

(3) $g(\mathbf{r}_k; \tau_k) = \arg\min_{\mathbf{x}} \left\{ f(\mathbf{x}) + \tfrac{1}{2\tau_k}\|\mathbf{x} - \mathbf{r}_k\|_2^2 \right\}$

which is equal to soft thresholding in the case of $f(\mathbf{x}) = \lambda\|\mathbf{\Psi}\mathbf{x}\|_1$ and orthogonal $\mathbf{\Psi}$. For certain sensing matrices and given mild conditions on $f$, AMP's state evolution guarantees that in the large system limit $M, N \to \infty$ with $M/N \to \delta$, the vector $\mathbf{r}_k$ behaves as the original signal corrupted by zero-mean white Gaussian noise:

(4) $\mathbf{r}_k = \mathbf{x}_0 + \mathcal{CN}(\mathbf{0}, \tau_k^2 \mathbf{I}_N)$

where $\tau_k$ is an iteration-dependent standard deviation. The state evolution of AMP has been proven for real i.i.d. Gaussian measurements in [4] and i.i.d. sub-Gaussian measurements in [3]. It has also been empirically shown that state evolution holds for uniformly undersampled Fourier measurements of a random artificial signal [9]. When state evolution holds, AMP is known to exhibit very fast convergence. However, for generic $\mathbf{A}$, the behavior of AMP is not well understood, and it has been noted by a number of authors [21, 6, 15, 12, 22] that it can encounter convergence problems. The recently proposed Vector Approximate Message Passing (VAMP) algorithm [20] broadened the class of measurement matrices for which (4) holds, namely to those matrices that are 'right-orthogonally invariant', and was also found to perform very well on certain reconstruction tasks.

I-A Approximate message passing for compressed sensing MRI
In compressed sensing MRI [14], measurements are formed of undersampled Fourier coefficients, so that $\mathbf{A} = \mathbf{M}_\Omega \mathbf{F}$, where $\mathbf{F} \in \mathbb{C}^{N \times N}$ is a 2D or 3D discrete Fourier transform and $\mathbf{M}_\Omega \in \mathbb{R}^{M \times N}$ is an undersampling mask that selects the $j$th row of $\mathbf{F}$ if $j \in \Omega$ for sampling set $\Omega$. The signal of interest is a natural image, so typically has a highly non-uniform spectral density that is concentrated at low frequencies. Accordingly, the sampling set $\Omega$ is usually generated such that there is a higher probability of sampling lower frequencies. This work considers an $\Omega$ with elements drawn independently from a Bernoulli distribution with non-uniform probability, such that $\mathbb{P}(j \in \Omega) = p_j$. In this variable density setting there are no theoretical guarantees for AMP or VAMP; in practice the behavior of (4) is not observed and the algorithms typically perform poorly.

This letter presents a new method for undersampled signal reconstruction, which we term the Variable Density Approximate Message Passing (VDAMP) algorithm. For Fourier coefficients of a realistic image randomly sampled with variable density and an orthogonal wavelet $\mathbf{\Psi}$, we have empirical evidence that a state evolution occurs. Unlike the white effective noise of (4), the $\mathbf{r}_k$ of VDAMP behaves as the ground truth wavelet coefficients corrupted by zero-mean Gaussian noise with a separate variance for each wavelet subband, such that

(5) $\mathbf{r}_k = \mathbf{w}_0 + \mathcal{CN}(\mathbf{0}, \mathbf{\Sigma}_k)$

where $\mathbf{w}_0 = \mathbf{\Psi}\mathbf{x}_0$ and the effective noise covariance $\mathbf{\Sigma}_k$ is diagonal, so that for a $\mathbf{\Psi}$ with $s$ decomposition scales

(6) $\mathbf{\Sigma}_k = \mathrm{diag}\big(\tau_{k,1}\mathbf{1}_{N_1},\, \tau_{k,2}\mathbf{1}_{N_2},\, \ldots,\, \tau_{k,L}\mathbf{1}_{N_L}\big)$

where $\tau_{k,l}$ and $N_l$ refer to the variance and dimension of the $l$th subband respectively. We refer to (5) as the colored state evolution of VDAMP.
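The block-diagonal structure of (6) can be sketched numerically. The bookkeeping below assumes a 2D wavelet with one approximation band and three detail bands per scale (so $L = 3s + 1$ subbands), and the helper names are illustrative:

```python
import numpy as np

def subband_sizes(n, scales):
    """Subband dimensions N_l for a 2D wavelet decomposition of an
    n-pixel image: one approximation band plus three detail bands per
    scale (illustrative bookkeeping; assumes n divisible by 4**scales)."""
    sizes = [n // 4**scales]                 # coarsest approximation band
    for j in range(scales, 0, -1):
        sizes += 3 * [n // 4**j]             # H, V, D detail bands at scale j
    return sizes

def colored_noise(taus, sizes, rng):
    """Draw CN(0, Sigma) with Sigma diagonal as in (6): variance tau_l
    repeated over the N_l entries of subband l (real and imaginary
    parts each carry half of the complex variance)."""
    std = np.repeat(np.sqrt(np.asarray(taus) / 2), sizes)
    return std * (rng.standard_normal(std.size) + 1j * rng.standard_normal(std.size))
```

A draw from `colored_noise` is exactly the kind of subband-wise colored perturbation that (5) asserts $\mathbf{r}_k$ carries about $\mathbf{w}_0$.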
Selecting appropriate regularisation parameters such as $\lambda$ is a notable challenge in real-world compressed sensing MRI applications. We present an approach to parameter-free compressed sensing reconstruction using Stein's Unbiased Risk Estimate (SURE) [23] in conjunction with VDAMP, building on the work on AMP introduced in [17]. A strength of automatic parameter tuning via SURE is that it is possible to have a richer regularizer than would usually be feasible for a hand-tuned $\lambda$. We implement a variation on the SureShrink denoiser [10], using an iteration-dependent regularizer that has a separate sparse weighting per subband:

(7) $f(\mathbf{w}; \boldsymbol{\lambda}_k) = \|\boldsymbol{\lambda}_k \odot \mathbf{w}\|_1, \quad \boldsymbol{\lambda}_k = \big(\lambda_{k,1}\mathbf{1}_{N_1}^\top,\, \lambda_{k,2}\mathbf{1}_{N_2}^\top,\, \ldots,\, \lambda_{k,L}\mathbf{1}_{N_L}^\top\big)^\top$

where $\mathbf{1}_{N_l}$ is the $N_l$-dimensional column vector of ones. SURE has previously been employed for parameter-free compressed sensing MRI in [13], where the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [5] was used with SureShrink in place of the usual shrinkage step. This algorithm is herein referred to as SURE-IT. The effective noise of SURE-IT is highly non-Gaussian, so deviates from a proper theoretical basis for using SURE for threshold selection. To our knowledge, VDAMP is the first algorithm for variable density Fourier sampling of natural images where a state evolution has been observed.
II Description of algorithm
For AMP and VAMP, (4) states that the effective noise is white, so can be fully characterised by a scalar $\tau_k$. This is appropriate for the kind of uniform measurement matrices and separable, identical sparse signal models that are often encountered in abstract compressed sensing problems. However, when Fourier coefficients of a natural image are sampled, the effective noise is colored, so is poorly represented by a scalar [24]. VDAMP models the richer structure of effective noise in the variable density setting with a vector $\boldsymbol{\tau}_k$ that has one real number per wavelet subband.
The VDAMP algorithm is shown in Algorithm 1. Here, $\mathbf{P}$ is the diagonal matrix formed of sampling probabilities, $P_{jj} = p_j$ for $j = 1, \ldots, N$. The function $g(\cdot; \boldsymbol{\tau}_k)$ refers to some denoiser with a colored effective noise model. The notation $\langle g'(\mathbf{r}_k; \boldsymbol{\tau}_k)\rangle$ in line 7 refers to the (sub)gradient of the denoiser averaged over subbands, so that for $s$ decomposition scales it has $L$ unique entries, having the same structure as the $\boldsymbol{\lambda}_k$ of (7). In line 8, the notation $\odot$ is used for entrywise multiplication and $\oslash$ for entrywise division. $|\cdot|$ refers to the entrywise absolute magnitude of a vector or matrix.
The original AMP paper considers a uniform sampling model and an i.i.d. Gaussian $\mathbf{A}$ that is normalized such that $\mathbb{E}\{\mathbf{A}^H\mathbf{A}\} = \mathbf{I}_N$. This ensures that $\hat{\mathbf{x}} + \mathbf{A}^H(\mathbf{y} - \mathbf{A}\hat{\mathbf{x}})$ is an unbiased estimate of $\mathbf{x}_0$. For variable density sampling, the correct normalisation can be achieved by rescaling (1) by $\mathbf{P}^{-1/2}$. In VDAMP this is manifest in the gradient step of lines 3-4, which features a crucial weighting by $\mathbf{P}^{-1}$. This provides the correct normalisation in expectation over $\Omega$: $\mathbb{E}_\Omega\{\mathbf{M}_\Omega^H \mathbf{P}^{-1} \mathbf{M}_\Omega\} = \mathbf{I}_N$, which implies that

(8) $\mathbb{E}_{\Omega,\varepsilon}\big\{\hat{\mathbf{x}}_k + \mathbf{F}^H\mathbf{M}_\Omega^H\mathbf{P}^{-1}(\mathbf{y} - \mathbf{M}_\Omega\mathbf{F}\hat{\mathbf{x}}_k)\big\} = \mathbf{x}_0$

for any $\hat{\mathbf{x}}_k$. Such a rescaling is referred to as 'density compensation' in the MRI literature [18, 19], and was used in the original compressed sensing MRI paper with zero-filling to generate an unregularized, non-iterative baseline [14]. Line 5 of Algorithm 1 computes an estimate $\boldsymbol{\tau}_k$ of the colored effective noise variance. It can be shown that $\boldsymbol{\tau}_k$ is an unbiased estimate of the expected entrywise squared error: for every entry $j$ of subband $l$,

(9) $\mathbb{E}_{\Omega,\varepsilon}\{\tau_{k,l}\} = \mathbb{E}_{\Omega,\varepsilon}\{|r_{k,j} - w_{0,j}|^2\}, \quad j \in \mathcal{S}_l,$

so that, summing over subbands,

(10) $\mathbb{E}_{\Omega,\varepsilon}\{\|\mathbf{r}_k - \mathbf{w}_0\|_2^2\} = \sum_{l=1}^{L} N_l\, \mathbb{E}_{\Omega,\varepsilon}\{\tau_{k,l}\}$

for any $\hat{\mathbf{x}}_k$, where $\mathcal{S}_l$ indexes the entries of subband $l$. We assume both estimators in (8) and (10) concentrate around their expectation, and leave the study of the constraints this imposes on $p_j$ for future work. Note that for fixed $s$ the complexity of VDAMP is governed by $\mathbf{F}$ and $\mathbf{\Psi}$, whose fast implementations have complexity $\mathcal{O}(N\log N)$ and $\mathcal{O}(N)$ respectively.
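The unbiasedness underlying density compensation can be checked by Monte Carlo over Bernoulli sampling sets, since $\mathbf{M}_\Omega^H \mathbf{P}^{-1} \mathbf{M}_\Omega$ is diagonal with entries $b_j / p_j$ for $b_j \sim \mathrm{Bernoulli}(p_j)$. A sketch working directly on a vector of Fourier coefficients, with illustrative names:

```python
import numpy as np

def density_compensated_backprojection(x_freq, p, rng, trials=2000):
    """Average of M^H P^{-1} M applied to Fourier coefficients x_freq,
    over random Bernoulli(p) sampling sets Omega. Since each coefficient
    is kept with probability p_j and rescaled by 1/p_j, the average
    should approach x_freq itself as trials grows."""
    acc = np.zeros_like(x_freq)
    for _ in range(trials):
        mask = rng.random(p.size) < p          # one draw of the sampling set
        acc += np.where(mask, x_freq / p, 0)   # sampled entries, density-compensated
    return acc / trials
```

Low-probability (high-frequency) entries have the largest variance $|x_j|^2(1 - p_j)/p_j$ per draw, which is exactly the location-dependent aliasing energy that VDAMP's colored model tracks.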
Lines 6-8 are the model-based regularization phase from VAMP, but with a colored effective noise model, as in [11]. This phase includes the message passing Onsager correction term, which we have observed leads to the Gaussian effective noise of (5). Line 10 implements an unweighted gradient step that enforces exact consistency of the image estimate with the measurements.
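For unitary $\mathbf{F}$, an unweighted gradient step of this form reduces to overwriting the sampled Fourier coefficients of the current estimate with the measured values. A sketch under that assumption, where `y` is assumed stored on the full grid and indexed by a boolean sampling mask (illustrative names, not Algorithm 1's notation):

```python
import numpy as np

def data_consistency(x_est, y, mask):
    """One unweighted gradient step x + F^H M^H (y - M F x) for unitary
    F: equivalent to replacing the sampled Fourier coefficients of the
    estimate with the measured values y."""
    X = np.fft.fft2(x_est, norm="ortho")   # F x, with unitary scaling
    X[mask] = y                            # exact consistency on the sampling set
    return np.fft.ifft2(X, norm="ortho")   # back to image domain
```

After this step, re-measuring the output on the sampling set returns $\mathbf{y}$ exactly, which is the consistency property the text describes.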
III Numerical experiments
In the experiments presented in this section, the denoiser $g(\mathbf{r}_k; \boldsymbol{\tau}_k)$ was the complex soft thresholding operator with an automatically tuned subband-dependent threshold. In other words, we used a complex adaption of SURE to approximately solve

(11) $\hat{\mathbf{w}}_k = \arg\min_{\mathbf{w}} \left\{ \|\boldsymbol{\lambda}_k \odot \mathbf{w}\|_1 + \tfrac{1}{2}\big\|(\mathbf{w} - \mathbf{r}_k) \oslash \sqrt{\boldsymbol{\tau}_k}\big\|_2^2 \right\}$

where $\sqrt{\boldsymbol{\tau}_k}$ is the entrywise square root of $\boldsymbol{\tau}_k$ and $\boldsymbol{\lambda}_k$ is of the form of (7). (11) was solved using a procedure similar to the usual SureShrink, but with a version of SURE adapted for effective noise that is complex and colored. Consider a ground truth subband $\mathbf{w}_l$ corrupted by complex Gaussian noise with white phase: $\mathbf{r}_l = \mathbf{w}_l + \mathcal{CN}(\mathbf{0}, \tau_l \mathbf{I}_{N_l})$. For complex soft thresholding with threshold $\lambda$, it can be shown that

(12) $\mathrm{cSURE}(\lambda; \mathbf{r}_l, \tau_l) = \sum_{j=1}^{N_l} \left[ \min(|r_{l,j}|, \lambda)^2 - \tau_l + \tau_l\left(2 - \frac{\lambda}{|r_{l,j}|}\right)\mathbb{1}\{|r_{l,j}| > \lambda\} \right]$

is an unbiased estimate of the risk. For each subband the optimal threshold was estimated via

(13) $\hat{\lambda}_l = \arg\min_{\lambda} \mathrm{cSURE}(\lambda; \mathbf{r}_l, \tau_l)$

by evaluating (12) for a finite set of trial thresholds. For large dimension $N_l$ one would expect by the law of large numbers that cSURE is close to the true risk, and for the threshold to be almost optimal. Since a larger number of decomposition scales $s$ gives subbands with lower dimensionality, there is a trade-off between the size of $s$ and the quality of threshold selection with cSURE.

We considered the reconstruction of a Shepp-Logan phantom artificially corrupted with complex Gaussian noise with white phase, as in (1). We assumed that $\sigma^2$ was known a priori; in practice it can be well estimated with an empty prescan. All sampling probabilities $p_j$ were generated using the polynomial variable density sampling function from the Sparse MRI package available at https://people.eecs.berkeley.edu/~mlustig/Software.html. We considered a Haar wavelet at $s = 4$ decomposition scales, which are referred to as scales 1-4, where scale 1 is the finest and scale 4 is the coarsest. All experiments were conducted in MATLAB 9.4 on a 1.70 GHz Intel i5-8350U processor with 8GB of RAM, and can be reproduced with code available at https://github.com/charlesmillard/VDAMP.
III-A Comparison with FISTA and SURE-IT
To establish comparative reconstruction quality and convergence speed, VDAMP was compared with FISTA and SURE-IT. For FISTA we used a sparse weighting $\lambda$ tuned with an exhaustive search so that the mean-squared error was minimised after 10 seconds. For SURE-IT the mean-squared error estimate was updated using the ground truth, $\tau_k = \|\hat{\mathbf{x}}_k - \mathbf{x}_0\|_2^2 / N$, and (11) with this scalar $\tau_k$ was implemented. All three algorithms were initialised with a vector of zeros.

Three sampling sets were generated at undersampling factors of approximately 4, 6 and 8, and VDAMP, FISTA and SURE-IT were run for 10 seconds. Fig. 1 shows the NMSE as a function of time for each algorithm. The NMSE at every iteration was calculated so that exact data consistency was ensured, as in line 10 of VDAMP. The mean per-iteration compute time was 0.065s for FISTA, 0.077s for SURE-IT, and 0.091s for VDAMP. Fig. 2 shows the ground truth image, the sampling set, and the FISTA and VDAMP reconstructions at undersampling factor 8 after 2s. The entrywise error is also shown for both algorithms.
III-B State evolution and error tracking
[Figure: Normalized quantile-quantile plots against a Gaussian of three subbands of $\mathbf{r}_k$ at three iterations (blue), with points along a straight line (red). The real part is plotted in the top and bottom rows and the imaginary part in the middle row. Linearity of the blue points indicates that the data comes from a Gaussian distribution. Finite dimensional effects causing small deviations from an exact Gaussian are more apparent at coarse scales, where the dimension is smaller.]
This section empirically analyses VDAMP in greater depth, continuing to use the illustrative case of undersampling factor 8. In Fig. 3, the absolute value of the residual $\mathbf{r}_k - \mathbf{w}_0$ is shown for three representative iterations. For the same iterations, Fig. 4 shows quantile-quantile plots of the real parts of three illustrative subbands of $\mathbf{r}_k$: the diagonal detail at scale 1, the horizontal detail at scale 2 and the vertical detail at scale 4. These figures provide empirical evidence that the effective noise of VDAMP evolves as (5).
IV Conclusions
As in the original AMP paper [8], the claim of a state evolution has been substantiated with empirical evidence only. Theoretical work is required to establish the generality of the state evolution observed in these experiments.
In order for the algorithm to be compatible with realistic MRI acquisition protocols, practical developments are required to allow sampling with readout lines and multiple coils. The experiments conducted in this letter indicate that, given such developments, VDAMP would have a number of significant advantages over algorithms currently used in the compressed sensing MRI literature. VDAMP's state evolution provides an informed and efficient way to tune model parameters via SURE, implying that a single algorithm can be used for any image type and variable density scheme without the need for manual adjustment. More degrees of freedom are allowed in the model, enabling higher-order prior information such as anisotropy, variability across scales and structured sparsity, without the need to estimate the structure a priori as in model-based compressed sensing [2]. The substantial reduction in required compute time observed in these experiments could be clinically valuable, as real MRI reconstruction tasks typically have very high dimensionality, so it is often only viable to run a reconstruction algorithm for a limited number of iterations.

It is known that the state evolution of AMP holds for a wide range of denoisers [16]. In a recent survey [1], a number of standard compressed sensing algorithms that leverage image denoisers designed for Gaussian noise were shown to perform well on MRI reconstruction tasks, despite the mismatch between the effective noise and its model. A sophisticated denoiser equipped to deal with wavelet coefficients corrupted with known colored Gaussian noise would be expected to perform well in conjunction with VDAMP.
References
 [1] (2019-03) Plug and play methods for magnetic resonance imaging. External Links: Link Cited by: §IV.
 [2] (2010-04) Model-Based Compressive Sensing. IEEE Transactions on Information Theory 56 (4), pp. 1982–2001. External Links: Link, Document, ISSN 00189448 Cited by: §IV.
 [3] (2015-04) Universality in polytope phase transitions and message passing algorithms. The Annals of Applied Probability 25 (2), pp. 753–822. External Links: Link, Document, ISSN 10505164 Cited by: §I.
 [4] (2011-02) The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing. IEEE Transactions on Information Theory 57 (2), pp. 764–785. External Links: Link, Document, ISSN 00189448 Cited by: §I.
 [5] (2009-01) A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences 2 (1), pp. 183–202. External Links: Link, Document, ISSN 19364954 Cited by: §I-A.
 [6] (2014-06) On convergence of approximate message passing. In 2014 IEEE International Symposium on Information Theory, pp. 1812–1816. External Links: Link, ISBN 9781479951864, Document Cited by: §I.
 [7] (2006-02) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52 (2), pp. 489–509. External Links: Link, Document, ISSN 00189448 Cited by: §I.
 [8] (2006-04) Compressed sensing. IEEE Transactions on Information Theory 52 (4), pp. 1289–1306. External Links: Link, Document, ISSN 00189448 Cited by: §I, §IV.
 [9] (2009-11) Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences of the United States of America 106 (45), pp. 18914–18919. External Links: Link, Document, ISSN 10916490 Cited by: §I.
 [10] (1995-12) Adapting to Unknown Smoothness via Wavelet Shrinkage. Journal of the American Statistical Association 90 (432), pp. 1200–1224. External Links: Link, Document, ISSN 01621459 Cited by: §I-A.
 [11] (2016-02) Expectation consistent approximate inference: Generalizations and convergence. In IEEE International Symposium on Information Theory - Proceedings, Vol. 2016-August, pp. 190–194. External Links: Link, ISBN 9781509018062, Document, ISSN 21578095 Cited by: §II.
 [12] (2015-04) Approximate Message Passing with Unitary Transformation. External Links: Link Cited by: §I.
 [13] (2012-11) Accelerated MR imaging using compressive sensing with no free parameters. Magnetic Resonance in Medicine 68 (5), pp. 1450–1457. External Links: Link, Document, ISSN 07403194 Cited by: §I-A.
 [14] (2007-12) Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58 (6), pp. 1182–1195. External Links: Link, Document, ISSN 07403194 Cited by: §I-A, §II.
 [15] (2017) Orthogonal AMP. IEEE Access 5, pp. 2020–2033. External Links: Link, Document, ISSN 21693536 Cited by: §I.
 [16] (2016-09) From Denoising to Compressed Sensing. IEEE Transactions on Information Theory 62 (9), pp. 5117–5144. External Links: Link, Document, ISSN 00189448 Cited by: §IV.
 [17] (2013-10) Parameterless Optimal Approximate Message Passing. External Links: Link Cited by: §I-A.
 [18] (1999-01) Sampling density compensation in MRI: Rationale and an iterative numerical solution. Magnetic Resonance in Medicine 41 (1), pp. 179–186. External Links: Link, Document, ISSN 07403194 Cited by: §II.
 [19] (2001-10) Advances in sensitivity encoding with arbitrary k-space trajectories. Magnetic Resonance in Medicine 46 (4), pp. 638–651. External Links: Link, Document Cited by: §II.
 [20] (2019) Vector Approximate Message Passing. IEEE Transactions on Information Theory, pp. 1–1. External Links: Link, Document, ISSN 00189448 Cited by: §I.
 [21] (2014-06) On the convergence of approximate message passing with arbitrary matrices. In 2014 IEEE International Symposium on Information Theory, pp. 236–240. External Links: Link, ISBN 9781479951864, Document Cited by: §I.
 [22] (2016-12) Fixed Points of Generalized Approximate Message Passing With Arbitrary Matrices. IEEE Transactions on Information Theory 62 (12), pp. 7464–7474. External Links: Link, Document, ISSN 00189448 Cited by: §I.
 [23] (1981-11) Estimation of the Mean of a Multivariate Normal Distribution. The Annals of Statistics 9 (6), pp. 1135–1151. External Links: Link, Document, ISSN 00905364 Cited by: §I-A.
 [24] (2017-12) The Empirical Effect of Gaussian Noise in Undersampled MRI Reconstruction. Tomography 3 (4), pp. 211–221. External Links: Link, Document, ISSN 23791381 Cited by: §II.