1 Introduction
The mathematical problem of compressed sensing (CS) consists in recovering a sparse signal from a small number of measurements. More precisely, we wish to recover a vector $x \in \mathbb{C}^n$ from a vector of measurements $y = Ax \in \mathbb{C}^m$, where $A \in \mathbb{C}^{m \times n}$ is the sensing matrix and $m \ll n$. The signal $x$ is said to be $s$-sparse if it has at most $s$ non-zero entries. The recovery is usually performed by solving the following minimization problem, called basis pursuit (BP):
$$\min_{z \in \mathbb{C}^n} \|z\|_1 \quad \text{such that} \quad Az = y. \tag{1}$$
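For concreteness, problem (1) can be solved with any off-the-shelf convex solver. The following minimal real-valued sketch of ours uses the cvxpy toolbox (the matrix A and the data y are assumed given):

```python
import numpy as np
import cvxpy as cp

def basis_pursuit(A: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve min ||z||_1 subject to A z = y (real-valued sketch)."""
    n = A.shape[1]
    z = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(z)), [A @ z == y]).solve()
    return z.value
```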
One of the classical CS results can be formulated as follows [7, 6, 10]. Let $A_0 \in \mathbb{C}^{n \times n}$ satisfy the isotropy condition $A_0^* A_0 = \mathrm{Id}_n$, and suppose that the measurement matrix $A$ is constructed by drawing $m$ random rows of $A_0$ in an independent uniform manner. Define the coherence of the matrix $A_0$ to be $\kappa(A_0) = n \max_{1 \le k \le n} \|a_k\|_\infty^2$, where $a_1, \ldots, a_n$ are the rows of $A_0$. This quantity represents the coherence between the sensing and the sparsifying bases (low coherence meaning that a vector of the sparsifying basis is approximately uniformly spread in the sensing basis). If $m \gtrsim \kappa(A_0)\, s \ln(n/\varepsilon)$, then an $s$-sparse vector $x$ can be exactly recovered by solving (1) with probability at least $1 - \varepsilon$.

In many applications (including MRI), the sensing matrix is coherent, meaning that $\kappa(A_0)$ is large. It can be shown that incoherence typically holds between the standard (pixel) basis and the Fourier basis. However, natural images have sparse representations not in the pixel basis directly, but rather in wavelet bases, i.e. $x = \Psi z$ with $z$ sparse, and wavelet bases are not incoherent with the Fourier basis.
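For example, the unitary DFT matrix is perfectly incoherent with the pixel basis ($\kappa = 1$), which can be checked numerically with the following illustrative sketch:

```python
import numpy as np

def coherence(A0: np.ndarray) -> float:
    """kappa(A0) = n * max_k ||a_k||_inf^2, where a_k are the rows of A0."""
    n = A0.shape[1]
    return n * np.max(np.abs(A0)) ** 2

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT matrix
print(coherence(F))                     # -> 1.0, the optimally incoherent case
```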
In practice, uniformly drawn measurements lead to very poor reconstructions. It was observed, however, that MR image reconstruction from undersampled frequencies could be significantly improved by drawing measurements according to variable density strategies (VDS), favouring low frequencies over high ones.
VDS strategies have received a justification in the CS literature [6, 8, 9]. If the measurements are drawn independently, the probability of drawing the $k$-th measurement being $\pi_k = \|a_k\|_\infty^2 / \sum_{l=1}^n \|a_l\|_\infty^2$, then an $s$-sparse vector can be reconstructed exactly from $m$ measurements with probability at least $1 - \varepsilon$, provided that $m \gtrsim s \left( \sum_{k=1}^n \|a_k\|_\infty^2 \right) \ln(n/\varepsilon)$.
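A minimal sketch of this sampling density (illustrative code; `A0` stands for the full measurement matrix with rows $a_k$):

```python
import numpy as np

def vds_probabilities(A0: np.ndarray) -> np.ndarray:
    """pi_k proportional to ||a_k||_inf^2, normalized to sum to one."""
    weights = np.max(np.abs(A0), axis=1) ** 2  # ||a_k||_inf^2 for each row
    return weights / weights.sum()

# Example: draw m row indices of A0 according to the variable density.
# rng = np.random.default_rng(0)
# idx = rng.choice(A0.shape[0], size=m, p=vds_probabilities(A0))
```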
It was shown experimentally, however, that this result is not sufficient to explain the success of CS in applications such as MRI [1]. This is in particular due to the fact that the above result does not assume any structure (apart from sparsity) in the signals to be recovered. A natural extension is the structured sparsity approach, where one assumes that some prior information on the support is known, e.g. sparsity by levels in the wavelet domain (see [1] for a comprehensive theory for Fourier sampling, based on isolated measurements under a sparsity-by-levels assumption in the wavelet domain). This strategy allows one to incorporate any kind of prior information on the structure of the signal and to study its influence on the quality of CS reconstructions.
Another obstacle to applying classical CS results in a large number of practical settings is that isolated measurements are incompatible with the physics of acquisition. For this reason, more recent contributions [5, 2] have addressed structured VDS, i.e. sampling along trajectories. This approach makes it possible to give recovery guarantees for block-structured acquisition with an explicit dependency on the support of the vector to reconstruct, and it offers many possibilities, such as optimizing the drawing probability or identifying the classes of supports recoverable with block sampling strategies.
Instead of drawing rows of $A_0$, which corresponds to probing isolated points in the frequency domain (k-space) in the context of MRI, it was proposed in [5, 2] to draw blocks of rows, which corresponds to drawing independent Cartesian lines in k-space. This framework does not, however, cover the case of non-Cartesian acquisition, e.g. acquisition along radial spokes, which all intersect at the center of k-space, or the more complex trajectories often used in MRI (spiral or non-parametric SPARKLING trajectories, see Figure 1).
[Figure 1: examples of sampling trajectories in MRI k-space: radial spokes, spiral, and non-parametric SPARKLING trajectories.]
One important aspect of the non-Cartesian setting, from the CS theory viewpoint, is that when the frequencies of the Fourier transform are not taken on the Cartesian grid, the corresponding matrix $A_0$ no longer fulfills the isotropy condition $A_0^* A_0 = \mathrm{Id}$. The violation of the isotropy condition in the non-Cartesian setting makes it necessary to develop a theory for anisotropic CS.

Some classical CS results were extended to the anisotropic setting in [12]. The authors provided a theoretical bound on the number of measurements necessary for the exact reconstruction of a sparse vector in the case of uniform isolated measurements. Another recent paper [3] on anisotropic CS extends the results of [12] to the infinite-dimensional setting.
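To make the loss of isotropy concrete, the following illustrative sketch (the uniform random off-grid frequencies are our choice) compares $\|A_0^* A_0 - \mathrm{Id}\|$ for on-grid and off-grid Fourier frequencies:

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)

def fourier_matrix(freqs: np.ndarray) -> np.ndarray:
    """Rows are normalized complex exponentials at the given frequencies."""
    return np.exp(-2j * np.pi * np.outer(freqs, np.arange(n))) / np.sqrt(n)

grid = np.arange(n) / n                           # Cartesian (on-grid) frequencies
offgrid = np.sort(rng.uniform(0.0, 1.0, size=n))  # non-Cartesian (off-grid) frequencies

for name, nu in [("on-grid", grid), ("off-grid", offgrid)]:
    A0 = fourier_matrix(nu)
    gap = np.linalg.norm(A0.conj().T @ A0 - np.eye(n), 2)
    print(name, "||A0* A0 - Id|| =", gap)         # ~1e-15 on-grid, O(1) off-grid
```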
In the present work we propose to combine the approaches of [12] and [5, 2] to develop anisotropic CS results that take structured sparsity and variable density structured acquisition into account.
The present note is organised as follows. In section 2 we introduce the notation. In section 3 we present the main result. In section 4 we give the proof of the main theorem. In section 5 we present formulas for optimal sampling densities in the case of isolated measurements and block-structured sampling. The Appendix contains some classical results of probability theory and compressed sensing theory that are used in the proofs of section 4.
Acknowledgements. This work was carried out during a one-year Inria delegation of A. Kazeykina in the Parietal team of Inria at NeuroSpin, CEA Saclay.
2 Notation
Let $p \in \mathbb{N}^*$ and $B_k \in \mathbb{C}^{d_k \times n}$, $k = 1, \ldots, p$. Let $N = \sum_{k=1}^p d_k$ and construct the matrix $A_0 \in \mathbb{C}^{N \times n}$ by stacking the blocks on top of each other: $A_0 = (B_1^*, \ldots, B_p^*)^*$. The matrix $A_0$ represents the set of possible measurements imposed by a specific sensor device. We will assume that $A_0$ is injective, so that the Gram matrix $A_0^* A_0$ is invertible.
The sensing matrix $A$ is constructed by drawing randomly $m$ blocks of rows of the matrix $A_0$. More precisely, let $X$ be a random block taking the values $B_k / \sqrt{\pi_k}$ with probabilities $\pi_k$, $\sum_{k=1}^p \pi_k = 1$. For $l = 1, \ldots, m$, let $X_l$ be i.i.d. copies of the random block $X$. The random sensing matrix is constructed as follows:
$$A = \frac{1}{\sqrt{m}} \left( X_1^*, \ldots, X_m^* \right)^*. \tag{2}$$
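A sketch of this construction (illustrative helper of ours; `blocks` is the list $B_1, \ldots, B_p$ and `pi` the vector of probabilities $\pi_k$):

```python
import numpy as np

def draw_sensing_matrix(blocks, pi, m, rng):
    """Stack m i.i.d. renormalized blocks B_k / sqrt(pi_k), as in (2)."""
    idx = rng.choice(len(pi), size=m, p=pi)           # i.i.d. block indices K_1..K_m
    rows = [blocks[k] / np.sqrt(pi[k]) for k in idx]  # X_l = B_{K_l} / sqrt(pi_{K_l})
    return np.vstack(rows) / np.sqrt(m)               # A = (X_1*, ..., X_m*)* / sqrt(m)
```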
Define the Gram matrix
$$G = A_0^* A_0 = \sum_{k=1}^p B_k^* B_k.$$
Note that $G^{-1}$ exists, since we assumed $A_0$ to be injective. Note also that if $A_0$ satisfies the isotropy condition $A_0^* A_0 = \mathrm{Id}_n$, where $\mathrm{Id}_n$ is the $n \times n$ identity matrix, then $G = \mathrm{Id}_n$.
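The matrix $G$ is the natural normalisation for the random sensing matrix: with the renormalisation by $\sqrt{\pi_k}$ in (2), a one-line computation gives
$$\mathbb{E}[A^* A] = \frac{1}{m} \sum_{l=1}^m \mathbb{E}[X_l^* X_l] = \sum_{k=1}^p \pi_k \frac{B_k^* B_k}{\pi_k} = \sum_{k=1}^p B_k^* B_k = G,$$
so that $A^* A$ concentrates around $G$ rather than around the identity, whatever the drawing probability $\pi$.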
Let $S \subset \{1, \ldots, n\}$. Denote $s = \mathrm{Card}(S)$. Define $P_S \in \mathbb{C}^{n \times n}$ to be the matrix of the linear projection $P_S x = x_S$, where $x_S$ is the restriction of $x$ to the components in $S$. Define the quantities $\Theta(S, \pi)$ and $\Lambda(S, \pi)$ to be positive numbers such that
$$\Theta(S, \pi) \ge \cdots \tag{3}$$
$$\Lambda(S, \pi) \ge \cdots \tag{4}$$
Note that
$$\cdots$$
and thus, if $\Lambda(S, \pi)$ is taken as the least upper bound, then
$$\cdots \tag{5}$$
Note that in the isotropic case ($G = \mathrm{Id}$) the quantity $\Theta(S, \pi)$ defined by (3) does not coincide with the quantity introduced in [5, 2]. This is due to the fact that a straightforward generalisation of the definition used in [5, 2] to the anisotropic case does not preserve the relation (5), which is verified in the isotropic case. To preserve this relation in the anisotropic case, we prefer to consider a symmetrised version of the quantity.
We will denote by $(e_j)_{1 \le j \le n}$ the canonical basis of $\mathbb{C}^n$. Finally, for a non-zero number $a \in \mathbb{C}$ we denote $\mathrm{sign}(a) = a/|a|$, and for a vector $x \in \mathbb{C}^n$ we define $\mathrm{sign}(x) = (\mathrm{sign}(x_j))_{1 \le j \le n}$.
3 Main result
Theorem 1.
Let $x \in \mathbb{R}^n$ or $\mathbb{C}^n$ be a vector supported on $S$, such that $\mathrm{sign}(x_S)$ forms a Rademacher or a Steinhaus sequence. Let $A$ be the random sensing matrix defined by (2) associated with the drawing probability $\pi$. Suppose we are given the data $y = Ax$. Then, given $\varepsilon \in (0, 1)$ and provided that
$$m \ge C\, \Theta(S, \pi) \ln\!\left(\frac{n}{\varepsilon}\right) \tag{6}$$
for $C$ a numerical constant, the vector $x$ is the unique minimizer of the basis pursuit problem (1) with probability at least $1 - \varepsilon$.
Remark 1.
Note that the analogous result in the isotropic case (see Theorem 3.3 of [2]) ensures the exact reconstruction of $x$ under a weaker condition on the number of measurements $m$. The possibility to obtain a tighter bound is due to the presence of additional symmetries that can be efficiently exploited in the isotropic case (see also Remark 2).
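The statement can be explored numerically. The following self-contained sketch (all sizes, the off-grid Fourier model and the choice of isolated measurements as blocks of size one are illustrative choices of ours) draws a sensing matrix as in (2) and checks recovery of a Rademacher-signed sparse vector by basis pursuit:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, p, m, s = 64, 64, 40, 3

# Anisotropic dictionary: p isolated off-grid Fourier measurements (d_k = 1).
freqs = np.sort(rng.uniform(0.0, 1.0, size=p))
A0 = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n))) / np.sqrt(n)

# Variable density on the blocks, here pi_k proportional to ||a_k||_inf^2.
w = np.max(np.abs(A0), axis=1) ** 2
pi = w / w.sum()

# Random sensing matrix as in (2): m i.i.d. renormalized rows.
idx = rng.choice(p, size=m, p=pi)
A = A0[idx] / np.sqrt(pi[idx])[:, None] / np.sqrt(m)

# s-sparse vector with Rademacher signs on a random support S.
x = np.zeros(n)
S = rng.choice(n, size=s, replace=False)
x[S] = rng.choice([-1.0, 1.0], size=s)

# Basis pursuit on the realified system (z real, A complex).
Ar = np.vstack([A.real, A.imag])
z = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(z)), [Ar @ z == Ar @ x]).solve()
print("recovery error:", np.linalg.norm(z.value - x))
```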
4 Proof
The proof of Theorem 1 is based on the following proposition.
Proposition 1.
Proof. This proposition is a corollary of Theorem 4.26 of [10], which we formulate as Theorem 2 in the Appendix (see also Corollary 4.28 of [10]).
Indeed, consider the condition (ii) of Theorem 2. First of all, we note that the fact that is injective implies that is injective. Next, take
where exists due to assumption (i) of the proposition (indeed is the adjoint of the matrix which is invertible because it is square and injective).
Then the condition is satisfied. Further, the condition , is rewritten as , , which is satisfied if (ii) of Proposition 1 is satisfied.
∎
We now formulate and prove two Lemmas that will be used in the proof of Theorem 1.
Lemma 1.
For every with and for every , the following holds
Proof. Let
Then .
We have the following estimate:
Consider , this matrix being self-adjoint we have
Since
we have that . In a similar way,
Finally, the required result follows from Proposition 4 of the Appendix, applied with a suitable choice of the parameters. ∎
Lemma 2.
Let . Then, for every
Proof. Fix and define , and . Note that .
First, due to the definition of , we can estimate
$$\cdots \tag{7}$$
which is due to the following:
Thus the following estimate is true: .
The next required estimate is obtained by using the Cauchy-Schwarz inequality and (7):
We use the independence of the vectors and the fact that they have zero mean value to get the following estimate:
Now we use (7) again to estimate:
which implies
The result then follows from Proposition 5 of the Appendix and a union bound.
∎
Remark 2.
Note that in the anisotropic case the estimates of Lemmas 1 and 2 are looser than those obtained for the isotropic case in [2] (see Lemmas C.1 and C.2 of [2], and note the weaker quantities appearing in our results). This is due to the fact that in the isotropic case it is possible to exploit some extra symmetries to obtain tighter estimates.
By Lemma 1, condition (i) of Proposition 1 fails with probability not higher than . The latter expression is bounded by provided that
Now let us study when condition (ii) of Proposition 1 fails. Denote . Then, by union bound,
where the last bound is due to a Hoeffding-type inequality for Rademacher or Steinhaus sequences (see Propositions 6 and 7, applied with a suitable choice of parameters).
Now we study the second term. Note that
Take and . Denote . Let be the event that and let be the event that . Note that implies that .
Set . Then implies that , and so means that holds, where denotes the event complementary to . Thus we get the following estimate
Define
By Lemma 1 the probability is bounded by . Thus it can be majorised by if
$$\cdots \tag{8}$$
Now take for some . By Lemma 2 probability is bounded by , if
If we assume that , then we can write that is bounded by , if
$$\cdots \tag{9}$$
Finally, set , assume
$$\cdots \tag{10}$$
and choose with . Then is bounded by if
The latter condition is satisfied due to assumption (10) and the choice of . Indeed, due to (10) we have that
and thus, due to the definition of ,
which implies that
Plugging the chosen value of into (8), we obtain that it suffices to take
for (BP) to have a unique solution with the required probability. If we choose $\Lambda(S, \pi)$ in the definition (4) to be the least upper bound, then inequality (5) implies that it suffices to choose $m$ verifying (6) to guarantee the result of Theorem 1.
∎
5 Optimal sampling strategies
In this section we derive formulas for the probabilities $\pi$ minimising the quantity $\Theta(S, \pi)$ defined by (3) and arising in the bound (6).
We consider the following two principal sampling strategies.
-
Isolated measurements
Let $a_1, \ldots, a_p \in \mathbb{C}^{1 \times n}$ be a set of row vectors. Set $B_k = a_k$ and $d_k = 1$ for all $k = 1, \ldots, p$. This setting represents isolated measurements in MRI.
-
Block-structured sampling. Let $a_1, \ldots, a_N \in \mathbb{C}^{1 \times n}$ be a set of row vectors and let $(I_k)_{1 \le k \le p}$ denote a partition of the set $\{1, \ldots, N\}$. The rows are then partitioned into blocks $B_k$: $B_k = (a_i)_{i \in I_k}$. In this setting $d_k = \mathrm{Card}(I_k)$ and $\sum_{k=1}^p d_k = N$. This setting corresponds to sampling blocks of measurements in MRI; a sketch of such a partition is given after this list.
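For instance, for a 2D image the rows of the 2D discrete Fourier matrix can be grouped by horizontal k-space line; a minimal illustrative sketch (the construction and names are ours):

```python
import numpy as np

N1 = 32                                   # 2D image of size N1 x N1
F = np.fft.fft(np.eye(N1)) / np.sqrt(N1)  # 1D unitary DFT
A0 = np.kron(F, F)                        # 2D DFT on vectorized images: N = N1**2 rows

# Partition the N rows into p = N1 blocks, one per horizontal k-space line:
# block k gathers all rows sharing the vertical frequency index k.
blocks = [A0[k * N1:(k + 1) * N1, :] for k in range(N1)]
```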
Define
Proposition 2.
In the setting of isolated measurements, the probability $\pi$ minimising $\Theta(S, \pi)$ is
$$\cdots$$
The corresponding value of $\Theta(S, \pi)$ is given by
$$\cdots$$
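Although the explicit formulas are specific to each setting, the optimisation behind Propositions 2 and 3 can be illustrated on the generic form $\Theta(S, \pi) = \max_{1 \le k \le p} c_k(S)/\pi_k$ with nonnegative weights $c_k(S)$ (an assumed model, typical of such bounds):
$$\min_{\pi_k > 0, \ \sum_k \pi_k = 1} \ \max_{1 \le k \le p} \frac{c_k(S)}{\pi_k} \quad \text{is attained at} \quad \pi_k = \frac{c_k(S)}{\sum_{l=1}^p c_l(S)},$$
since this choice makes all the ratios $c_k(S)/\pi_k$ equal to $\sum_l c_l(S)$, while any other admissible $\pi$ has $\pi_k$ strictly smaller than this value for at least one $k$, which strictly increases the maximum.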
Define
Proposition 3.
In the setting of block-structured sampling, the probability $\pi$ minimising $\Theta(S, \pi)$ is
$$\cdots$$
The corresponding value of $\Theta(S, \pi)$ is given by
$$\cdots$$
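A quick numerical sanity check of this minimisation step (the weights `c` are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.uniform(0.1, 1.0, size=8)   # illustrative weights c_k(S)

pi_opt = c / c.sum()                # candidate optimal density
theta_opt = np.max(c / pi_opt)      # equals c.sum()

# Any other density on the simplex can only increase max_k c_k / pi_k.
for _ in range(1000):
    pi = rng.dirichlet(np.ones(c.size))
    assert np.max(c / pi) >= theta_opt - 1e-9
print("pi_k = c_k / sum(c) attains the minimum:", theta_opt)
```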
6 Conclusion
In the anisotropic setting we provided a theoretical bound on the number of measurements needed to reconstruct a sparse signal with high probability. The bound is given in terms of a quantity that allows one to take into account a priori information on the sparsity structure of the signal and to apply variable density block-structured sampling strategies. We have provided general formulas for the probability distributions optimising the obtained theoretical bound. We hope that these formulas can be further analysed to derive explicit expressions for optimal sampling densities in the context of non-Cartesian MRI.
Appendix
Proposition 4.
(Matrix Bernstein inequality [15])
Consider a finite sequence $(Z_l)$ of independent random matrices of common dimension $d_1 \times d_2$. Assume that each random matrix satisfies $\mathbb{E} Z_l = 0$ and $\|Z_l\| \le K$ almost surely, and set $\sigma^2 = \max\left\{ \left\| \sum_l \mathbb{E}(Z_l Z_l^*) \right\|, \left\| \sum_l \mathbb{E}(Z_l^* Z_l) \right\| \right\}$. Then for all $t \ge 0$,
$$\mathbb{P}\left( \Big\| \sum_l Z_l \Big\| \ge t \right) \le (d_1 + d_2) \exp\left( \frac{-t^2/2}{\sigma^2 + K t / 3} \right).$$
Proposition 5.
(Vector Bernstein inequality [2])
Consider a set of independent random vectors such that
and let such that
Then for every the following holds:
Proposition 6.
(Hoeffding’s bound for Rademacher sequence [10]) Let and be a Rademacher sequence. Then
Proposition 7.
(Hoeffding-type bound for Steinhaus sequence [10]) Let and be a Steinhaus sequence. Then for any
Theorem 2.
(Theorem 4.26 of [10]) Given a matrix $A \in \mathbb{C}^{m \times n}$, a vector $x \in \mathbb{C}^n$ with support $S$ is the unique minimizer of $\|z\|_1$ subject to $Az = Ax$ if one of the following equivalent conditions holds:
-
- $A_S$ is injective and there exists a vector $h \in \mathbb{C}^m$ such that $(A^* h)_j = \mathrm{sign}(x_j)$ for $j \in S$ and $|(A^* h)_l| < 1$ for $l \notin S$.
References
- [1] Adcock B., Hansen A., Poon C., Roman B. Breaking the coherence barrier: A new theory for compressed sensing. Forum of Mathematics, Sigma. 5 (2017)
- [2] Adcock B., Boyer C., Brugiapaglia S. On oracle-type local recovery guarantees in compressed sensing. arXiv:1806.03789 (2018)
- [3] Alberti G.S., Santacesaria M. Infinite dimensional compressed sensing from anisotropic measurements and applications to inverse problems in PDE. Appl. Comput. Harmon. Anal. https://doi.org/10.1016/j.acha.2019.08.002 (2019)
- [4] Boada F., Gillen J., Shen G., Chang S., Thulborn K. Fast three dimensional sodium imaging. Magnetic resonance in medicine. 37(5), 706–715 (1997)
- [5] Boyer C., Bigot J., Weiss P. Compressed sensing with structured sparsity and structured acquisition. Appl. Comput. Harmon. Anal. 46(2), 312–350 (2019)
- [6] Candes E., Plan Y. A probabilistic and ripless theory of compressed sensing. IEEE Trans. Inf. Theory. 57(11) 7235–7254 (2011)
- [7] Candes E., Romberg J., Tao T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 52(2), 489–509 (2006)
- [8] Chauffert N., Ciuciu P., Weiss P. Variable density compressed sensing in MRI. Theoretical vs heuristic sampling strategies. IEEE 10th International Symposium on Biomedical Imaging. 298–301 (2013)
- [9] Chauffert N., Ciuciu P., Kahn J., Weiss P. Variable density sampling with continuous trajectories. Application to MRI. SIAM J. Imaging Sci. 7(4), 1962–1992 (2014)
- [10] Foucart S., Rauhut H. A mathematical introduction to compressive sensing. Birkhäuser, New York (2013)
- [11] Jackson J., Meyer C., Nishimura D., Macovski A. Selection of a convolution function for Fourier inversion using gridding (computerised tomography application). IEEE Trans. Med. Imag. 10(3), 473–478 (1991)
- [12] Kueng R., Gross D. RIPless compressed sensing from anisotropic measurements. Linear Alg. Appl. 441, 110–123 (2014)
- [13] Lauterbur P. Image formation by induced local interactions: examples employing nuclear magnetic resonance. Nature. 242, 190–191 (1973)
- [14] Lazarus C., Weiss P., Chauffert N., Mauconduit F., El Gueddari L., Destrieux C., Zemmoura I., Vignaud A., Ciuciu P. SPARKLING: variable-density k-space filling curves for accelerated T2*-weighted MRI. Magn. Reson. Med. 81(6), 3643–3661 (2019)
- [15] Tropp J. User-friendly tail bounds for sums of random matrices. Found. Comput. Math. 12(4), 389–434 (2012)