I. Introduction
Consider the high-dimensional regression problem, where the goal is to estimate a vector $\beta_0 \in \mathbb{R}^N$ from a noisy measurement $y \in \mathbb{R}^n$ given by
(1.1) $y = A\beta_0 + w$.
Here $A \in \mathbb{R}^{n \times N}$ is a known real-valued measurement matrix, and $w \in \mathbb{R}^n$ is the measurement noise. The sampling ratio is denoted by $\delta = n/N$.
Approximate Message Passing (AMP) [1, 2, 3, 4, 5, 6] is a class of low-complexity, scalable algorithms to solve the above problem, under suitable assumptions on $A$ and $\beta_0$. AMP algorithms are derived as Gaussian or quadratic approximations of loopy belief propagation algorithms (e.g., min-sum, sum-product) on the dense factor graph corresponding to (1.1).
Given the observed vector $y$, AMP generates successive estimates of the unknown vector, denoted by $\beta^t \in \mathbb{R}^N$ for $t = 1, 2, \ldots$. Set $\beta^0 = 0$, the all-zeros vector. For $t \geq 0$, AMP computes
(1.2) $z^t = y - A\beta^t + \frac{z^{t-1}}{n} \sum_{i=1}^N \eta'_{t-1}\big([A^* z^{t-1} + \beta^{t-1}]_i\big),$
(1.3) $\beta^{t+1} = \eta_t(A^* z^t + \beta^t),$
for an appropriately chosen sequence of functions $\{\eta_t\}_{t \geq 0}$. In (1.2) and (1.3), $A^*$ denotes the transpose of $A$, $\eta_t$ acts componentwise when applied to a vector, and $\eta'_t$ denotes its (weak) derivative. Quantities with a negative index are set to zero throughout the paper. For a demonstration of how the AMP updates (1.2) and (1.3) are derived from a min-sum-like message passing algorithm, we refer the reader to [1].
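For concreteness, the updates (1.2)–(1.3) can be sketched in a few lines of NumPy. The soft-thresholding denoiser and the threshold rule below are illustrative assumptions (one common choice, discussed later in the paper), not part of the analysis; all dimensions and parameters are likewise hypothetical.

```python
import numpy as np

def eta_soft(s, theta):
    """Soft-thresholding denoiser; acts componentwise on s."""
    return np.sign(s) * np.maximum(np.abs(s) - theta, 0.0)

def eta_soft_prime(s, theta):
    """Weak derivative of soft thresholding: 1 outside [-theta, theta], else 0."""
    return (np.abs(s) > theta).astype(float)

def amp(y, A, T=20, alpha=1.5):
    """Run the AMP iteration (1.2)-(1.3) with a soft-thresholding denoiser.

    The threshold theta_t = alpha * sqrt(||z^t||^2 / n) is an illustrative
    plug-in rule; negative-index quantities are treated as zero, so z^0 = y.
    """
    n, N = A.shape
    beta = np.zeros(N)                       # beta^0 = 0
    z = y.copy()                             # z^0 = y
    theta = alpha * np.sqrt(np.sum(z**2) / n)
    for _ in range(T):
        s = A.T @ z + beta                   # effective observation A^* z^t + beta^t
        beta_next = eta_soft(s, theta)       # update (1.3)
        # Onsager correction term of (1.2): (z^t / n) * sum_i eta'_t(s_i)
        onsager = (z / n) * np.sum(eta_soft_prime(s, theta))
        z = y - A @ beta_next + onsager      # update (1.2), shifted by one step
        beta = beta_next
        theta = alpha * np.sqrt(np.sum(z**2) / n)
    return beta, z
```

A typical usage generates a sparse signal and a Gaussian matrix with i.i.d. $\mathcal{N}(0,1/n)$ entries, then checks that the estimate improves on the trivial all-zeros estimate.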
For a Gaussian measurement matrix $A$ with entries that are i.i.d. $\mathcal{N}(0, 1/n)$, it was rigorously proven [1, 7] that the performance of AMP can be characterized in the large system limit via a simple scalar iteration called state evolution. This result was extended to the class of matrices with i.i.d. sub-Gaussian entries in [8]. In particular, these results imply that performance measures such as the normalized mean squared error converge almost surely to constants that can be computed via the distribution of $\beta_0$. (The large system limit is defined as $n, N \to \infty$ such that $n/N = \delta$, a constant.)
AMP has also been applied to a variety of other high-dimensional estimation problems. Some examples are low-rank matrix estimation [9, 10, 11, 12, 13, 14], decoding of sparse superposition codes [15, 16, 17], matrix factorization [18], and estimation in generalized linear and bilinear models [5, 19, 20].
Main Contributions: In this paper, we obtain a non-asymptotic result for the performance of the AMP iteration in (1.2)–(1.3), when the measurement matrix has i.i.d. Gaussian entries $\mathcal{N}(0, 1/n)$. We derive a concentration inequality (Theorem 1) which implies that the probability of deviation between various performance measures (such as the mean squared error) and their limiting constant values falls exponentially in $n$. Our result provides theoretical support for empirical findings that have demonstrated excellent agreement of AMP performance with state evolution predictions for moderately large dimensions, e.g., of the order of several hundreds [2].
In addition to refining earlier asymptotic results, the concentration inequality in Theorem 1 also clarifies the effect of the iteration number $t$ versus the problem dimension $n$. One implication is that the actual AMP performance is close to the state evolution prediction with high probability as long as $t$ is of smaller order than $\log n / \log \log n$. This is particularly relevant for settings where the number of AMP iterations and the problem dimension are both large, e.g., solving the LASSO via AMP [6].
We prove the concentration result in Theorem 1 by analyzing the following general recursion:
(1.4) $h^{t+1} = A^* m^t - \xi_t q^t, \qquad q^t = f_t(h^t, \beta_0), \qquad b^t = A q^t - \lambda_t m^{t-1}, \qquad m^t = g_t(b^t, w).$
Here, for $t \geq 0$, the vectors $h^{t+1}, q^t \in \mathbb{R}^N$ and $b^t, m^t \in \mathbb{R}^n$ describe the state of the algorithm, $f_t, g_t$ are Lipschitz functions that are separable (act componentwise when applied to vectors), and $\lambda_t, \xi_t$ are scalars that can be computed from the state of the algorithm. The algorithm is initialized with $q^0 \in \mathbb{R}^N$. Further details on the recursion in (1.4), including how the AMP in (1.2)–(1.3) can be obtained as a special case, are given in Section IV-A.
For ease of exposition, our analysis will focus on the recursion (1.4) and the problem of high-dimensional regression. However, it can be extended to a number of related problems. A symmetric version of the above recursion yields AMP algorithms for problems such as solving the TAP equations in statistical physics [21] and symmetric low-rank matrix estimation [10, 12]. This recursion is defined in terms of a symmetric matrix $G \in \mathbb{R}^{n \times n}$ with diagonal entries i.i.d. $\mathcal{N}(0, 2/n)$, and off-diagonal entries i.i.d. $\mathcal{N}(0, 1/n)$ for $i < j$ (with $G_{ji} = G_{ij}$). (In other words, $G$ can be generated as $(A + A^*)/\sqrt{2}$, where $A \in \mathbb{R}^{n \times n}$ has i.i.d. $\mathcal{N}(0, 1/n)$ entries.) Then, for $t \geq 0$, let
(1.5) $x^{t+1} = G f_t(x^t) - b_t f_{t-1}(x^{t-1}).$
Here, for $t \geq 0$, the state of the algorithm is represented by a single vector $x^t \in \mathbb{R}^n$, the function $f_t$ is Lipschitz and separable, and $b_t$ is a constant computed from the state of the algorithm (see [1, Sec. IV] for details). The recursion (1.5) is initialized with a deterministic vector $x^0 \in \mathbb{R}^n$.
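A minimal sketch of the symmetric recursion, under illustrative assumptions: $f_t(x) = \tanh(x)$ for every $t$, and the Onsager coefficient taken as $b_t = \frac{1}{n}\sum_i f'_t(x^t_i)$, which is the standard choice described in [1, Sec. IV]. All sizes and the denoiser are hypothetical choices for demonstration.

```python
import numpy as np

def symmetric_amp(G, x0, T=10):
    """Iterate x^{t+1} = G f(x^t) - b_t f(x^{t-1}) with f = tanh,
    where b_t = (1/n) sum_i f'(x^t_i) is the assumed Onsager coefficient.
    Quantities with negative index are treated as zero."""
    n = G.shape[0]
    x = x0.astype(float)
    f_prev = np.zeros(n)                 # f_{-1}(x^{-1}) = 0
    for _ in range(T):
        f_x = np.tanh(x)
        b_t = np.mean(1.0 - f_x**2)      # (1/n) sum_i tanh'(x_i)
        x_new = G @ f_x - b_t * f_prev
        x, f_prev = x_new, f_x
    return x

# G can be generated as (A + A^T)/sqrt(2) with A_{ij} ~ N(0, 1/n),
# matching the variance profile described in the text.
```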
Our analysis of the recursion (1.4) can be easily extended to obtain an analogous non-asymptotic result for the symmetric recursion in (1.5). Therefore, for problems of estimating either symmetric or rectangular low-rank matrices in Gaussian noise, our analysis can be used to refine existing asymptotic AMP guarantees (such as those in [9, 10, 11]), by providing a concentration result similar to that in Theorem 1. We also expect that the non-asymptotic analysis can be generalized to the case where the recursion in (1.4) generates matrices rather than vectors, i.e., with a fixed number of columns as $n, N$ grow large (see [7] for details). Extending the analysis to this matrix recursion would yield non-asymptotic guarantees for the generalized AMP [5] and AMP for compressed sensing with spatially coupled measurement matrices [22].
Since the publication of the conference version of this paper, the analysis described here has been used in two recent papers: an error exponent for sparse regression codes with AMP decoding was obtained in [23], and a non-asymptotic result for AMP with non-separable denoisers was given in [24].
I-A. Assumptions
Before proceeding, we state the assumptions on the model (1.1) and the functions used to define the AMP. In what follows, $K, \kappa, C, c > 0$ are generic positive constants whose values are not exactly specified but do not depend on $n$. We use the notation $[n]$ to denote the set $\{1, \ldots, n\}$.
Measurement Matrix: The entries of the measurement matrix $A$ are i.i.d. $\mathcal{N}(0, 1/n)$.
Signal: The entries of the signal $\beta_0 \in \mathbb{R}^N$ are i.i.d. according to a sub-Gaussian distribution $p_\beta$. We recall that a zero-mean random variable $X$ is sub-Gaussian if there exist positive constants $K, \kappa$ such that $P(|X| > \epsilon) \leq K e^{-\kappa \epsilon^2}$ for all $\epsilon > 0$ [25].
Measurement Noise: The entries of the measurement noise vector $w$ are i.i.d. according to some sub-Gaussian distribution $p_w$ with mean $0$ and variance $\sigma^2$. The sub-Gaussian assumption implies that, for $\epsilon \in (0, 1)$,
(1.6) $P\left(\left|\frac{\|w\|^2}{n} - \sigma^2\right| \geq \epsilon\right) \leq K e^{-\kappa n \epsilon^2}$
for some constants $K, \kappa > 0$ [25].
The Functions $\eta_t$: The denoising functions $\eta_t : \mathbb{R} \to \mathbb{R}$ in (1.3) are Lipschitz continuous for each $t \geq 0$, and are therefore weakly differentiable. The weak derivative, denoted by $\eta'_t$, is assumed to be differentiable, except possibly at a finite number of points, with bounded derivative everywhere it exists. Allowing $\eta'_t$ to be non-differentiable at a finite number of points covers denoising functions like soft-thresholding, which is used in applications such as the LASSO [6].
Functions defined with scalar inputs are assumed to act componentwise when applied to vectors.
The remainder of the paper is organized as follows. In Section II we review state evolution, the formalism predicting the performance of AMP, and discuss how knowledge of the signal distribution $p_\beta$ and the noise distribution $p_w$ can help choose good denoising functions $\eta_t$. However, we emphasize that our result holds for the AMP with any choice of $\eta_t$ satisfying the conditions above, even those that do not depend on $p_\beta$ and $p_w$. In Section II-A, we introduce a stopping criterion for termination of the AMP. In Section III, we give our main result (Theorem 1), which proves that the performance of AMP can be characterized accurately via state evolution for large but finite sample size $n$. Section IV gives the proof of Theorem 1. The proof is based on two technical lemmas: Lemmas 3 and 5. The proof of Lemma 5 is long; we therefore give a brief summary of the main ideas in Section IV-F and then the full proof in Section V. In the appendices, we list a number of concentration inequalities that are used in the proof of Lemma 5. Some of these, such as the concentration inequality for the sum of pseudo-Lipschitz functions of i.i.d. sub-Gaussian random variables (Lemma B.4), may be of independent interest.
II. State Evolution and the Choice of $\eta_t$
In this section, we briefly describe state evolution, the formalism that predicts the behavior of AMP in the large system limit. We only review the main points, followed by a few examples; a more detailed treatment can be found in [1, 4].
Given $p_\beta$, let $\sigma_0^2 = \mathbb{E}[\beta^2]/\delta$, where $\beta \sim p_\beta$. Iteratively define the quantities $\{\tau_t^2\}_{t \geq 0}$ and $\{\sigma_t^2\}_{t \geq 1}$ as
(2.1) $\tau_t^2 = \sigma^2 + \sigma_t^2, \qquad \sigma_{t+1}^2 = \frac{1}{\delta}\,\mathbb{E}\big[\big(\eta_t(\beta + \tau_t Z) - \beta\big)^2\big],$
where $\beta \sim p_\beta$ and $Z \sim \mathcal{N}(0,1)$ are independent random variables.
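The scalar recursion (2.1) is straightforward to evaluate numerically. The sketch below iterates it with expectations approximated by Monte Carlo; the Bernoulli-Gaussian prior, soft-thresholding denoiser, and sample sizes used in the accompanying example are illustrative assumptions, not choices made in the text.

```python
import numpy as np

def state_evolution(eta, sample_beta, sigma2, delta, T=15, mc=200_000, seed=0):
    """Iterate (2.1): tau_t^2 = sigma^2 + sigma_t^2 and
    sigma_{t+1}^2 = (1/delta) E[(eta(beta + tau_t Z) - beta)^2],
    with the expectation approximated over mc Monte Carlo samples.

    eta(s, tau2) is the denoiser (may use tau_t^2, e.g. to set a threshold);
    sample_beta(rng, m) draws m i.i.d. samples from the prior p_beta.
    Returns the sequence (tau_0^2, ..., tau_{T-1}^2)."""
    rng = np.random.default_rng(seed)
    beta = sample_beta(rng, mc)
    Z = rng.standard_normal(mc)
    sigma_t2 = np.mean(beta**2) / delta      # sigma_0^2 = E[beta^2]/delta
    taus = []
    for _ in range(T):
        tau_t2 = sigma2 + sigma_t2
        taus.append(tau_t2)
        s = beta + np.sqrt(tau_t2) * Z
        sigma_t2 = np.mean((eta(s, tau_t2) - beta)**2) / delta
    return np.array(taus)
```

Note that $\tau_t^2 \geq \sigma^2$ always holds by the first equation in (2.1), which gives a useful sanity check on any implementation.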
The AMP update (1.3) is underpinned by the following key property of the vector $A^* z^t + \beta^t$: for large $n$, $A^* z^t + \beta^t$ is approximately distributed as $\beta_0 + \tau_t Z$, where $Z$ is an i.i.d. $\mathcal{N}(0,1)$ random vector independent of $\beta_0$. In light of this property, a natural way to generate $\beta^{t+1}$ from the "effective observation" $s = A^* z^t + \beta^t$ is via the conditional expectation:
(2.2) $\beta^{t+1} = \mathbb{E}[\beta \mid \beta + \tau_t Z = s],$
i.e., $\beta^{t+1}$ is the MMSE estimate of $\beta$ given the noisy observation $s = \beta + \tau_t Z$. Thus if $p_\beta$ is known, the Bayes-optimal choice for $\eta_t$ is the conditional expectation in (2.2).
In the definition of the "modified residual" $z^t$ in (1.2), the third term on the RHS is crucial to ensure that the effective observation $A^* z^t + \beta^t$ has the above distributional property. For intuition about the role of this 'Onsager term', the reader is referred to [1, Section I-C].
We review two examples to illustrate how full or partial knowledge of $p_\beta$ can guide the choice of the denoising function $\eta_t$. In the first example, suppose we know that each element of $\beta_0$ is chosen uniformly at random from the set $\{+1, -1\}$. Computing the conditional expectation in (2.2) with this $p_\beta$, we obtain $\eta_t(s) = \tanh(s/\tau_t^2)$ [1]. The constants $\tau_t$ are determined iteratively from the state evolution equations (2.1).
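For the uniform $\pm 1$ prior, the closed form $\eta_t(s) = \tanh(s/\tau_t^2)$ for the conditional expectation (2.2) can be checked numerically. The Monte Carlo comparison below (rejection onto a small window around a fixed $s$) is purely illustrative; the window width and sample count are arbitrary.

```python
import numpy as np

def eta_pm1(s, tau2):
    """Bayes-optimal denoiser for beta uniform on {+1, -1}:
    E[beta | beta + tau Z = s] = tanh(s / tau^2)."""
    return np.tanh(s / tau2)

def cond_mean_mc(s, tau2, mc=2_000_000, seed=1):
    """Monte Carlo estimate of E[beta | beta + tau Z ~ s] by keeping
    samples whose observation falls in a small window around s."""
    rng = np.random.default_rng(seed)
    beta = rng.choice([-1.0, 1.0], size=mc)
    obs = beta + np.sqrt(tau2) * rng.standard_normal(mc)
    mask = np.abs(obs - s) < 0.05
    return beta[mask].mean()
```

The windowed conditional mean agrees with $\tanh(s/\tau^2)$ up to Monte Carlo error, which illustrates why the closed form is the MMSE denoiser for this prior.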
As a second example, consider the compressed sensing problem, where the signal $\beta_0$ is sparse: $p_\beta$ is such that $P(\beta \neq 0) \leq \xi$ for some $\xi \in (0, 1)$. The parameter $\xi$ determines the sparsity of $\beta_0$. For this problem, the authors in [2, 4] suggested the choice $\eta_t(s) = \eta_s(s; \theta_t)$, where the soft-thresholding function is defined as
$\eta_s(s; \theta) = \begin{cases} s - \theta, & s > \theta, \\ 0, & -\theta \leq s \leq \theta, \\ s + \theta, & s < -\theta. \end{cases}$
The threshold at step $t$ is set to $\theta_t = \alpha \tau_t$, where $\alpha$ is a tunable constant and $\tau_t$ is determined by (2.1), making the threshold value proportional to the standard deviation of the noise in the effective observation. However, computing $\tau_t$ using (2.1) requires knowledge of $p_\beta$. In the absence of such knowledge, we can estimate $\tau_t^2$ by $\|z^t\|^2/n$: our concentration result (Lemma 5(e)) shows that this approximation is increasingly accurate as $n$ grows large. To fix $\alpha$, one could run the AMP with several different values of $\alpha$, and choose the one that gives the smallest value of $\|z^t\|^2/n$ for large $t$. We note that $\eta_t$ in each of the above examples is Lipschitz, and its derivative satisfies the assumption stated in Section I-A.
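The soft-thresholding function, its weak derivative (which exists everywhere except at the two points $\pm\theta$), and the plug-in estimate of $\tau_t^2$ from the residual can be written directly:

```python
import numpy as np

def eta_s(x, theta):
    """Soft thresholding: x - theta for x > theta, 0 on [-theta, theta],
    x + theta for x < -theta."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def eta_s_prime(x, theta):
    """Weak derivative: 1 outside [-theta, theta], 0 inside
    (undefined only at the two points x = +/- theta)."""
    return (np.abs(x) > theta).astype(float)

def tau2_hat(z):
    """Plug-in estimate of tau_t^2 from the residual: ||z^t||^2 / n."""
    return np.dot(z, z) / len(z)
```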
II-A. Stopping Criterion
To obtain a concentration result that clearly highlights the dependence on the iteration $t$ and the dimension $n$, we include a stopping criterion for the AMP algorithm. The intuition is that the AMP algorithm can be terminated once the expected squared error of the estimates (as predicted by the state evolution equations in (2.1)) is either very small or stops improving appreciably.
For Bayes-optimal AMP, where the denoising function $\eta_t$ is the conditional expectation given in (2.2), the stopping criterion is as follows. Terminate the algorithm at the first iteration $t \geq 0$ for which either
(2.3) $\sigma_{t+1}^2 < \epsilon_1 \quad \text{or} \quad \frac{\sigma_{t+1}^2}{\sigma_t^2} > 1 - \epsilon_2,$
where $\epsilon_1, \epsilon_2 > 0$ are pre-specified constants. Recall from (2.1) that $\delta\sigma_{t+1}^2$ is the (per-coordinate) expected squared error in the estimate $\beta^{t+1}$. Therefore, for suitably chosen values of $\epsilon_1, \epsilon_2$, the AMP will terminate when the expected squared error is either small enough, or has not significantly decreased from the previous iteration.
For the general case where $\eta_t$ is not the Bayes-optimal choice, the stopping criterion is: terminate the algorithm at the first iteration $t \geq 0$ for which at least one of the following is true:
(2.4) $(\sigma_t^\perp)^2 < \epsilon_1 \quad \text{or} \quad (\tau_t^\perp)^2 < \epsilon_2,$
where $\epsilon_1, \epsilon_2 > 0$ are pre-specified constants, and $(\sigma_t^\perp)^2, (\tau_t^\perp)^2$ are defined in (4.19). The precise definitions of these scalars are postponed to Sec. IV-B, as a few other definitions are needed first. For now, it suffices to note that $(\sigma_t^\perp)^2$ and $(\tau_t^\perp)^2$ are measures of how close the iterates at step $t$ are to the span of the iterates from the previous steps. Indeed, for the Bayes-optimal case, we show in Sec. IV-C that these quantities take a simple form in terms of the state evolution parameters.
Let $T^*$ be the first value of $t$ for which at least one of the conditions is met. Then the algorithm is run only for $0 \leq t < T^*$. It follows that for $0 \leq t < T^*$,
(2.5) $(\sigma_t^\perp)^2 \geq \epsilon_1 \quad \text{and} \quad (\tau_t^\perp)^2 \geq \epsilon_2.$
In the rest of the paper, we will use the stopping criterion to implicitly assume that $(\sigma_t^\perp)^2$ and $(\tau_t^\perp)^2$ are bounded below by positive constants.
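A stopping rule of this kind is a few lines of code. The version below implements our reading of the Bayes-optimal criterion (2.3) — stop when the predicted error $\sigma_{t+1}^2$ drops below $\epsilon_1$, or when it fails to improve on $\sigma_t^2$ by a relative factor $\epsilon_2$; the exact thresholds and the test sequence are illustrative.

```python
def stopping_time(sigma2, eps1, eps2):
    """Return the first t such that sigma2[t+1] < eps1 (error small enough)
    or sigma2[t+1] > (1 - eps2) * sigma2[t] (no appreciable improvement).

    sigma2 is the state-evolution sequence (sigma_0^2, sigma_1^2, ...);
    if neither condition ever fires, the last available index is returned."""
    for t in range(len(sigma2) - 1):
        if sigma2[t + 1] < eps1 or sigma2[t + 1] > (1 - eps2) * sigma2[t]:
            return t
    return len(sigma2) - 1
```

For example, a sequence that improves quickly at first and then stalls triggers the "no appreciable improvement" branch.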
III. Main Result
Our result, Theorem 1, is a concentration inequality for pseudo-Lipschitz (PL) loss functions. As defined in [1], a function $\phi : \mathbb{R}^m \to \mathbb{R}$ is pseudo-Lipschitz (of order $2$) if there exists a constant $L > 0$ such that for all $x, y \in \mathbb{R}^m$,
$|\phi(x) - \phi(y)| \leq L(1 + \|x\| + \|y\|)\|x - y\|,$
where $\|\cdot\|$ denotes the Euclidean norm.

Theorem 1.
With the assumptions listed in Section I-A, the following holds for any (order-$2$) pseudo-Lipschitz function $\phi : \mathbb{R}^2 \to \mathbb{R}$, $\epsilon \in (0, 1)$, and $0 \leq t < T^*$, where $T^*$ is the first iteration for which the stopping criterion in (2.4) is satisfied:
(3.1) $P\left(\left|\frac{1}{N}\sum_{i=1}^N \phi\big(\beta_i^{t+1}, \beta_{0,i}\big) - \mathbb{E}\big[\phi\big(\eta_t(\beta + \tau_t Z), \beta\big)\big]\right| \geq \epsilon\right) \leq K_t e^{-\kappa_t n \epsilon^2}.$
In the expectation in (3.1), $\beta \sim p_\beta$ and $Z \sim \mathcal{N}(0,1)$ are independent, and $\tau_t$ is given by (2.1). The constants $K_t$ and $\kappa_t$ are of the form $K_t = K C^t (t!)^{c_0}$ and $\kappa_t = \frac{1}{\kappa\, c^t (t!)^{c_0}}$, where $K, \kappa, C, c, c_0$ are universal constants (not depending on $n$, $\epsilon$, or $t$) that are not explicitly specified.
The probability in (3.1) is with respect to the product measure on the space of the measurement matrix $A$, the signal $\beta_0$, and the noise $w$.
Remarks:
1. By considering the pseudo-Lipschitz function $\phi(a, b) = (a - b)^2$, Theorem 1 proves that state evolution tracks the mean square error of the AMP estimates with exponentially small probability of error in the sample size $n$. Indeed, for all $0 \leq t < T^*$,
(3.2) $P\left(\left|\frac{1}{N}\|\beta^{t+1} - \beta_0\|^2 - \delta\sigma_{t+1}^2\right| \geq \epsilon\right) \leq K_t e^{-\kappa_t n \epsilon^2}.$
Similarly, taking $\phi(a, b) = |a - b|$, the theorem implies that the normalized $\ell_1$ error $\frac{1}{N}\|\beta^{t+1} - \beta_0\|_1$ is concentrated around $\mathbb{E}\big[|\eta_t(\beta + \tau_t Z) - \beta|\big]$.
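The agreement described in (3.2) is easy to observe numerically. The experiment below runs AMP with the Bayes-optimal $\tanh$ denoiser for a uniform $\pm 1$ prior and compares the empirical error $\frac{1}{N}\|\beta^{t+1}-\beta_0\|^2$ with the state evolution prediction $\delta\sigma_{t+1}^2$ from (2.1); all dimensions, noise levels, and sample counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n, N = 1200, 2000
delta, sigma = n / N, 0.3

beta0 = rng.choice([-1.0, 1.0], size=N)
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
w = sigma * rng.standard_normal(n)
y = A @ beta0 + w

# State evolution (2.1) by Monte Carlo, with eta_t(s) = tanh(s / tau_t^2)
mc = 400_000
beta_mc = rng.choice([-1.0, 1.0], size=mc)
Z_mc = rng.standard_normal(mc)
sigma_t2 = 1.0 / delta            # sigma_0^2 = E[beta^2]/delta = 1/delta
taus, se_mse = [], []             # se_mse[t] = delta * sigma_{t+1}^2
T = 10
for t in range(T):
    tau2 = sigma**2 + sigma_t2
    taus.append(tau2)
    err = np.tanh((beta_mc + np.sqrt(tau2) * Z_mc) / tau2) - beta_mc
    sigma_t2 = np.mean(err**2) / delta
    se_mse.append(delta * sigma_t2)

# AMP (1.2)-(1.3) with the same denoiser; eta'(s) = (1 - tanh^2(s/tau2)) / tau2
beta, z = np.zeros(N), y.copy()
for t in range(T):
    tau2 = taus[t]
    s = A.T @ z + beta
    beta_next = np.tanh(s / tau2)
    eta_prime = (1.0 - beta_next**2) / tau2
    z = y - A @ beta_next + (z / n) * np.sum(eta_prime)
    beta = beta_next

empirical_mse = np.mean((beta - beta0)**2)   # compare against se_mse[-1]
```

At these (moderate) dimensions the empirical error already sits close to the state evolution value, consistent with the exponential concentration in (3.2).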
2. Asymptotic convergence results of the kind given in [1, 6] are implied by Theorem 1. Indeed, from Theorem 1, the sum $\sum_{n \geq 1} K_t e^{-\kappa_t n \epsilon^2}$ is finite for any fixed $t$ and $\epsilon > 0$. Therefore the Borel-Cantelli lemma implies that for any fixed $t$:
$\lim_{N \to \infty} \frac{1}{N}\sum_{i=1}^N \phi\big(\beta_i^{t+1}, \beta_{0,i}\big) = \mathbb{E}\big[\phi\big(\eta_t(\beta + \tau_t Z), \beta\big)\big] \quad \text{almost surely.}$
3. Theorem 1 also refines the asymptotic convergence result by specifying how large $t$ can be (compared to the dimension $n$) for the state evolution predictions to be meaningful. Indeed, if we require the bound in (3.1) to go to zero with growing $n$, we need $K_t e^{-\kappa_t n \epsilon^2} \to 0$ as $n \to \infty$. Using the expressions for $K_t, \kappa_t$ from the theorem then yields the requirement $t = o\left(\frac{\log n}{\log \log n}\right)$.
Thus, when the AMP is run for a growing number of iterations, the state evolution predictions are guaranteed to be valid until iteration $t$ as long as the problem dimension $n$ grows faster than exponentially in $t \log t$. Though the constants in the bound have not been optimized, we believe that the dependence of these constants on $t!$ is inevitable in any induction-based proof of the result. An open question is whether this relationship between $t$ and $n$ is fundamental, or whether a different analysis of the AMP can yield constants which allow $t$ to grow faster with $n$.
4. As mentioned in the introduction, we expect that non-asymptotic results similar to Theorem 1 can be obtained for other estimation problems (with Gaussian matrices) for which rigorous asymptotic results have been proven for AMP. Examples of such problems include low-rank matrix estimation [9, 10, 11], robust high-dimensional M-estimation [26], AMP with spatially coupled matrices [22], and generalized AMP [7, 27].
IV. Proof of Theorem 1
We first lay out the notation that will be used in the proof, then state two technical lemmas (Lemmas 3 and 5) and use them to prove Theorem 1.
IV-A. Notation and Definitions
For consistency and ease of comparison, we use notation similar to [1]. To prove the technical lemmas, we use the general recursion in (1.4), which we write in a slightly different form below. Given $w \in \mathbb{R}^n$ and $\beta_0 \in \mathbb{R}^N$, define the column vectors $h^{t+1}, q^t \in \mathbb{R}^N$ and $b^t, m^t \in \mathbb{R}^n$ for $t \geq 0$ recursively as follows, starting with initial condition $q^0 \in \mathbb{R}^N$:
(4.1) $h^{t+1} = A^* m^t - \xi_t q^t, \qquad q^t = f_t(h^t, \beta_0), \qquad b^t = A q^t - \lambda_t m^{t-1}, \qquad m^t = g_t(b^t, w),$
where the scalars $\xi_t$ and $\lambda_t$ are defined as
(4.2) $\xi_t = \frac{1}{n}\sum_{i=1}^n g'_t(b_i^t, w_i), \qquad \lambda_t = \frac{1}{n}\sum_{i=1}^N f'_t(h_i^t, \beta_{0,i}).$
In (4.2), the derivatives of $f_t$ and $g_t$ are with respect to the first argument. The functions $f_t, g_t$ are assumed to be Lipschitz continuous for $t \geq 0$, hence the weak derivatives $f'_t$ and $g'_t$ exist. Further, $f'_t$ and $g'_t$ are each assumed to be differentiable, except possibly at a finite number of points, with bounded derivative everywhere it exists.
Let $\sigma_0^2 > 0$ be a finite constant. We let $q^0 \in \mathbb{R}^N$ be the initial condition of the recursion, and assume that there exist constants $K, \kappa > 0$ such that
(4.3) $P\left(\left|\frac{\|q^0\|^2}{n} - \sigma_0^2\right| \geq \epsilon\right) \leq K e^{-\kappa n \epsilon^2}.$
Define the state evolution scalars $\{\tau_t^2\}_{t \geq 0}$ and $\{\sigma_t^2\}_{t \geq 1}$ for the general recursion as follows:
(4.4) $\tau_t^2 = \mathbb{E}\big[\big(g_t(\sigma_t Z, W)\big)^2\big], \qquad \sigma_t^2 = \frac{1}{\delta}\,\mathbb{E}\big[\big(f_t(\tau_{t-1} Z, \beta)\big)^2\big],$
where $Z \sim \mathcal{N}(0,1)$, $W \sim p_w$, and $\beta \sim p_\beta$ are independent random variables. We assume that both $\sigma_0^2$ and $\tau_0^2$ are strictly positive.
The AMP algorithm is a special case of the general recursion in (4.1) and (4.2). Indeed, the AMP can be recovered by defining the following vectors recursively for $t \geq 0$, starting with $\beta^0 = 0$ and $z^0 = y$ (with the convention $\eta_{-1} \equiv 0$):
(4.5) $h^{t+1} = \beta_0 - (A^* z^t + \beta^t), \qquad q^t = \beta^t - \beta_0, \qquad b^t = w - z^t, \qquad m^t = -z^t.$
It can be verified that these vectors satisfy (4.1) and (4.2) with
(4.6) $f_t(h, \beta_0) = \eta_{t-1}(\beta_0 - h) - \beta_0, \qquad g_t(b, w) = b - w.$
Using this choice of $f_t, g_t$ in (4.4) yields the expressions for $\sigma_t^2, \tau_t^2$ given in (2.1). Using (4.6) in (4.2), we also see that for AMP,
(4.7) $\lambda_t = -\frac{1}{n}\sum_{i=1}^N \eta'_{t-1}\big([A^* z^{t-1} + \beta^{t-1}]_i\big), \qquad \xi_t = 1.$
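The change of variables above can be verified numerically: along any AMP trajectory generated by (1.2)–(1.3), the vectors $h^{t+1} = \beta_0 - (A^*z^t + \beta^t)$, $q^t = \beta^t - \beta_0$, $b^t = w - z^t$, $m^t = -z^t$ satisfy the identities in (4.1) exactly. A small sketch (the soft-thresholding denoiser and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 100, 200
A = rng.normal(0, 1/np.sqrt(n), (n, N))
beta0 = rng.binomial(1, 0.2, N) * rng.standard_normal(N)
w = 0.2 * rng.standard_normal(n)
y = A @ beta0 + w

theta = 0.5
eta = lambda s: np.sign(s) * np.maximum(np.abs(s) - theta, 0.0)
eta_p = lambda s: (np.abs(s) > theta).astype(float)

# one AMP step (1.2)-(1.3) from (beta^t, z^t) to (beta^{t+1}, z^{t+1})
beta_t, z_t = np.zeros(N), y.copy()               # t = 0 state
s = A.T @ z_t + beta_t
beta_t1 = eta(s)
z_t1 = y - A @ beta_t1 + (z_t / n) * np.sum(eta_p(s))

# change of variables (4.5)
h_t1 = beta0 - s                 # h^{t+1} = beta0 - (A^* z^t + beta^t)
q_t = beta_t - beta0             # q^t     = beta^t - beta0
q_t1 = beta_t1 - beta0           # q^{t+1} = beta^{t+1} - beta0
m_t = -z_t                       # m^t     = -z^t
b_t1 = w - z_t1                  # b^{t+1} = w - z^{t+1}

# identity h^{t+1} = A^* m^t - xi_t q^t with xi_t = 1, as in (4.1)/(4.7)
assert np.allclose(h_t1, A.T @ m_t - q_t)

# identity b^{t+1} = A q^{t+1} - lambda_{t+1} m^t with
# lambda_{t+1} = -(1/n) sum_i eta'([A^* z^t + beta^t]_i), as in (4.7)
lam = -np.sum(eta_p(s)) / n
assert np.allclose(b_t1, A @ q_t1 - lam * m_t)
```

Both assertions hold to machine precision, since the mapping (4.5) is an exact algebraic rewriting of the AMP updates, not an approximation.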
Recall that $\beta_0$ is the vector we would like to recover and $w$ is the measurement noise. Up to sign, the vector $h^{t+1}$ is the noise in the effective observation $A^* z^t + \beta^t$, while $q^t$ is the error in the estimate $\beta^t$. The proof will show that $h^{t+1}$ and $m^t$ are approximately i.i.d. $\mathcal{N}(0, \tau_t^2)$, while $q^t$ is approximately i.i.d. with zero mean and variance $\delta\sigma_t^2$ (so that $\|q^t\|^2/n \approx \sigma_t^2$). For the analysis, we work with the general recursion given by (4.1) and (4.2). Notice from (4.1) that for all $t$,
(4.8) $A^* m^t = h^{t+1} + \xi_t q^t, \qquad A q^t = b^t + \lambda_t m^{t-1}.$
Thus we have the matrix equations $X_t = A^* M_t$ and $Y_t = A Q_t$, where
(4.9) $X_t := H_t + Q_t \Xi_t, \qquad Y_t := B_t + [0 \mid M_{t-1}]\Lambda_t,$
with $\Xi_t := \mathrm{diag}(\xi_0, \ldots, \xi_{t-1})$ and $\Lambda_t := \mathrm{diag}(\lambda_0, \ldots, \lambda_{t-1})$.
The notation $[c_0 \mid \cdots \mid c_{k-1}]$ is used to denote a matrix with columns $c_0, \ldots, c_{k-1}$. Note that $m^{-1}$ and $q^{-1}$ are the all-zero vector. Additionally define the matrices
(4.10) $M_t := [m^0 \mid \cdots \mid m^{t-1}], \quad Q_t := [q^0 \mid \cdots \mid q^{t-1}], \quad B_t := [b^0 \mid \cdots \mid b^{t-1}], \quad H_t := [h^1 \mid \cdots \mid h^t].$
Note that $M_0$, $Q_0$, $B_0$, and $H_0$ are empty matrices. Using the above we see that $Y_1 = A q^0 = b^0$ and $X_1 = A^* m^0 = h^1 + \xi_0 q^0$.
We use the notation $m_{\|}^t$ and $q_{\|}^t$ to denote the projection of $m^t$ and $q^t$ onto the column space of $M_t$ and $Q_t$, respectively. Let
(4.11) $\alpha^t := (\alpha_0^t, \ldots, \alpha_{t-1}^t)^*, \qquad \gamma^t := (\gamma_0^t, \ldots, \gamma_{t-1}^t)^*$
be the coefficient vectors of these projections, i.e.,
(4.12) $m_{\|}^t = \sum_{i=0}^{t-1} \alpha_i^t\, m^i, \qquad q_{\|}^t = \sum_{i=0}^{t-1} \gamma_i^t\, q^i.$
The projections of $m^t$ and $q^t$ onto the orthogonal complements of the column spaces of $M_t$ and $Q_t$, respectively, are denoted by
(4.13) $m_{\perp}^t := m^t - m_{\|}^t, \qquad q_{\perp}^t := q^t - q_{\|}^t.$
Lemma 5 shows that for large $n$, the entries of $\alpha^t$ and $\gamma^t$ are concentrated around constants. We now specify these constants and provide some intuition about their values in the special case where the denoising function $\eta_t$ in the AMP recursion is the Bayes-optimal choice, as in (2.2).
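The projections in (4.12)–(4.13) are ordinary least-squares operations. The sketch below illustrates the decomposition; random vectors stand in for the actual AMP iterates, so the numbers themselves carry no statistical meaning.

```python
import numpy as np

rng = np.random.default_rng(7)
n, t = 50, 4
M = rng.standard_normal((n, t))     # columns play the role of m^0, ..., m^{t-1}
m = rng.standard_normal(n)          # plays the role of m^t

# coefficient vector alpha^t of the projection of m^t onto col(M_t), as in (4.12)
alpha, *_ = np.linalg.lstsq(M, m, rcond=None)
m_par = M @ alpha                   # projection m_parallel^t
m_perp = m - m_par                  # orthogonal part m_perp^t, as in (4.13)

# m_perp^t is orthogonal to every column of M_t
assert np.allclose(M.T @ m_perp, 0.0)
# the decomposition is exact and Pythagoras holds
assert np.allclose(m, m_par + m_perp)
assert abs(m @ m - (m_par @ m_par + m_perp @ m_perp)) < 1e-8
```

The same computation with $Q_t$ and $q^t$ produces $\gamma^t$, $q_{\|}^t$, and $q_{\perp}^t$.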
IV-B. Concentrating Values
Let and each be sequences of zero-mean jointly Gaussian random variables whose covariance is defined recursively as follows. For ,
(4.14) 
where
(4.15) 
where and are independent random variables. In the above, we take , the initial condition. Note that and , thus .
Define matrices for such that
(4.16) 
With these definitions, the concentrating values for $\gamma^t$ and $\alpha^t$ (if the covariance matrices defined in (4.16) are invertible) are
(4.17) 
with
(4.18) 
Let $(\sigma_0^\perp)^2 = \sigma_0^2$ and $(\tau_0^\perp)^2 = \tau_0^2$, and for $t > 0$ define
(4.19) 
Finally, we define the concentrating values for $\lambda_t$ and $\xi_t$ as
(4.20) $\hat{\lambda}_t := \frac{1}{\delta}\,\mathbb{E}\big[f'_t(\tau_{t-1} Z, \beta)\big], \qquad \hat{\xi}_t := \mathbb{E}\big[g'_t(\sigma_t Z, W)\big].$
Since $f_t$ and $g_t$ are assumed to be Lipschitz continuous, the derivatives $f'_t$ and $g'_t$ are bounded for $t \geq 0$. Therefore $\lambda_t, \xi_t$ defined in (4.2) and $\hat{\lambda}_t, \hat{\xi}_t$ defined in (4.20) are also bounded. For the AMP recursion, it follows from (4.6) that
(4.21) $\hat{\lambda}_t = -\frac{1}{\delta}\,\mathbb{E}\big[\eta'_{t-1}(\beta + \tau_{t-1} Z)\big], \qquad \hat{\xi}_t = 1.$
Lemma 1.
If $(\sigma_r^\perp)^2$ and $(\tau_r^\perp)^2$ are bounded below by some positive constants for $0 \leq r \leq t$, then the matrices defined in (4.16) are invertible for all steps up to $t+1$.
Proof:
We prove the result using induction. The matrices for the first step are $1 \times 1$ and invertible, since $\sigma_0^2$ and $\tau_0^2$ are both strictly positive by assumption. Assume that the matrices defined in (4.16) are invertible up to some step $t$. The matrix for the next step can be written in block form,
where the blocks are defined via (4.18). By the block inversion formula, a symmetric block matrix is invertible if its leading principal block and the corresponding Schur complement are both invertible. By the induction hypothesis, the leading block is invertible, and
(4.22) 
Hence the matrix for step $t+1$ is invertible. The argument for the second sequence of matrices in (4.16) is very similar. ∎
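The block-inversion step in the proof can be illustrated numerically: for a symmetric block matrix $\begin{pmatrix} C & u \\ u^* & c \end{pmatrix}$, one has $\det = \det(C)\,(c - u^* C^{-1} u)$, so the matrix is invertible when $C$ is invertible and the Schur complement $c - u^* C^{-1} u$ is nonzero. A toy check (all matrices below are illustrative, not the matrices of (4.16)):

```python
import numpy as np

rng = np.random.default_rng(11)
t = 4
# a well-conditioned symmetric positive definite leading block (induction hypothesis)
B = rng.standard_normal((t, t))
C = B @ B.T + t * np.eye(t)
u = rng.standard_normal(t)
c = float(u @ np.linalg.solve(C, u)) + 1.0     # makes the Schur complement exactly 1

M = np.block([[C, u[:, None]], [u[None, :], np.array([[c]])]])
schur = c - u @ np.linalg.solve(C, u)

# det(M) = det(C) * schur, so M is invertible iff C invertible and schur != 0
assert np.isclose(np.linalg.det(M), np.linalg.det(C) * schur)
assert abs(schur - 1.0) < 1e-8
Minv = np.linalg.inv(M)
assert np.allclose(M @ Minv, np.eye(t + 1))
```

The lower bounds on $(\sigma_r^\perp)^2$ and $(\tau_r^\perp)^2$ play exactly the role of keeping the Schur complement bounded away from zero at every induction step.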
We note that the stopping criterion ensures that the matrices in (4.16) are invertible for all iterations relevant to Theorem 1.
IV-C. Bayes-optimal AMP
The concentrating constants in (4.14)–(4.19) have simple representations in the special case where the denoising function $\eta_t$ is chosen to be Bayes-optimal, i.e., the conditional expectation of $\beta$ given the noisy observation $\beta + \tau_t Z$, as in (2.2). In this case:

It can be shown that the correlations in (4.15) take a simplified form for $r \leq s$. This is done in two steps. First verify that the following Markov property holds for the jointly Gaussian random variables with covariance given by (4.14):
We then use the above in the definition of (4.15) (with $f_t$ given by (4.6)), and apply the orthogonality principle to show the claim for $r \leq s$.

From the orthogonality principle, it also follows that for ,
where .