The approximate message passing (AMP) algorithm, first proposed by Donoho et al. in the field of compressed sensing (CS), is a state-of-the-art algorithm for Bayesian inference over the standard linear model (SLM) [3, 4, 5, 6, 7, 8, 9, 10, 11].
Since its first publication, much effort has been devoted to extending AMP [12, 13, 14, 15, 16, 17, 18, 19].
Vector AMP (VAMP) was proposed by extending the measurement matrix from i.i.d. sub-Gaussian to right-rotationally invariant matrices. For the partial discrete Fourier transform (DFT) matrix, a variant of AMP was proposed. For unitarily-invariant matrices, orthogonal AMP (OAMP) was proposed using a de-correlated linear estimator and a divergence-free nonlinear estimator.
However, in some applications such as quantized CS, linear classification, phase retrieval, etc., the measurements are nonlinear transforms of the input signal, so that the SLM no longer applies. To this end, Rangan extended AMP to generalized approximate message passing (GAMP) [23, 24] for generalized linear models (GLM), so that the signal of interest can be efficiently recovered from nonlinear measurements. Recently, both VAMP and turbo CS have been extended to handle GLM problems [14, 18, 19].
In this letter, a unified Bayesian inference framework for GLM inference is presented whereby the original GLM problem is iteratively reduced to a sequence of SLM problems. Under such a framework, a variety of inference methods for SLM can be easily extended to GLM. As specific instances, we extend AMP, VAMP, and sparse Bayesian learning (SBL) [25, 26] to GLM under this framework and obtain three GLM inference algorithms, termed Gr-AMP, Gr-VAMP, and Gr-SBL, respectively. It is proved that Gr-AMP is equivalent to GAMP. Note that although both AMP and VAMP have previously been extended to GLM, each extension was derived from a different perspective tailored to one specific SLM algorithm only. Under the proposed framework, however, we unveil that they actually share a common rule and, more importantly, this rule also suggests novel extensions of other SLM algorithms, e.g., SBL. Thus, the main contribution of this letter is a simple and unified Bayesian framework for the understanding and extension of SLM algorithms to GLM. Numerical results for 1-bit quantized CS demonstrate the effectiveness of this framework.
II. Problem Description
Consider the system model of GLM shown in Fig. 1. The signal is generated following a prior distribution . Then,
passes through a linear transform, where is a known matrix and the measurement ratio is . The observation vector is obtained through a component-wise random mapping, which is described by a factorized conditional distribution
The goal of GLM inference is to compute the minimum mean square error (MMSE) estimate of , i.e., , where the expectation is taken with respect to (w.r.t.) the posterior distribution , where denotes identity up to a normalization constant. In this letter, and are assumed known. Extensions to scenarios with unknown or are possible [6, 4, 27]. Note that the well-known SLM is a special case of GLM which reads
where is a Gaussian noise vector with mean and covariance matrix , i.e., where
denotes identity matrix of dimension.
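To make the two models concrete, the sketch below generates one instance of both the SLM and a GLM with 1-bit quantization as the component-wise mapping. The dimensions, sparsity ratio, and noise level are illustrative assumptions of ours, not the values used in the letter's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 256        # signal dimension and number of measurements (illustrative)
rho = 0.1              # sparsity ratio (assumed)
sigma2 = 1e-5          # noise variance (assumed)

# Bernoulli-Gaussian signal: each entry is nonzero with probability rho
x = rng.standard_normal(n) * (rng.random(n) < rho)

A = rng.standard_normal((m, n)) / np.sqrt(n)   # known measurement matrix
z = A @ x                                      # linear transform

w = np.sqrt(sigma2) * rng.standard_normal(m)
y_slm = z + w              # SLM: additive Gaussian noise
y_glm = np.sign(z + w)     # GLM: component-wise nonlinear (1-bit) mapping
```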
III. Unified Inference Framework
In this section we present a unified Bayesian inference framework, shown in Fig. 2, for the GLM problem. It consists of two modules: an SLM inference module A and a component-wise MMSE module B. The messages and (defined in (11), (10)) form the outputs of module A and inputs to module B, while the messages and (defined in (7), (6)) form the outputs of module B and inputs to module A. To perform GLM inference, we alternate between the two modules in a turbo manner. Assuming the prior distribution of as Gaussian with mean
and variance, module B performs the component-wise MMSE estimate and outputs and . Module A then performs SLM inference over a pseudo-SLM and updates the outputs and to module B. This process continues iteratively until convergence or some predefined stopping criterion is satisfied, where one iteration refers to one pass of messages from B to A and then back to B.
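As a concrete example of module B's component-wise MMSE step, consider 1-bit measurements y = sign(z + w) with w ~ N(0, sigma2), as in the letter's experiments. Under a Gaussian prior N(m_pri, v_pri) on z, the posterior moments have the standard probit-channel closed form sketched below; the function name and interface are ours.

```python
import math

def mmse_1bit(y, m_pri, v_pri, sigma2):
    """Posterior mean/variance of z ~ N(m_pri, v_pri) given y = sign(z + w),
    w ~ N(0, sigma2): the standard probit-channel moments."""
    s = math.sqrt(v_pri + sigma2)
    t = y * m_pri / s
    pdf = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    lam = pdf / cdf                       # inverse Mills ratio
    m_post = m_pri + y * (v_pri / s) * lam
    v_post = v_pri - (v_pri ** 2 / (v_pri + sigma2)) * lam * (lam + t)
    return m_post, v_post

# Noiseless 1-bit observation of a standard Gaussian z with y = +1 gives the
# half-normal moments E[z|y] = sqrt(2/pi), Var[z|y] = 1 - 2/pi.
m_post, v_post = mmse_1bit(+1.0, 0.0, 1.0, 0.0)
```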
Specifically, in the -th iteration, the inputs and to module B are treated as the prior mean and variance of , respectively. Assuming the prior is Gaussian, the posterior mean and variance of can be computed in module B by component-wise MMSE estimate, i.e.,
where the expectation is taken w.r.t. the posterior distribution
According to the turbo principle , the posterior mean and variance cannot be directly used as the output message since they incorporate information of the input messages. Instead, the so-called extrinsic mean and variance of are calculated by excluding the contribution of input messages and , which are given as 
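The extrinsic computation in (6)-(7) amounts to subtracting the input Gaussian message from the posterior in natural (inverse-variance) parameters. A minimal sketch, with names of our choosing:

```python
def extrinsic(m_post, v_post, m_pri, v_pri):
    """Remove the input Gaussian message from the posterior:
    1/v_ext = 1/v_post - 1/v_pri,  m_ext/v_ext = m_post/v_post - m_pri/v_pri."""
    v_ext = 1.0 / (1.0 / v_post - 1.0 / v_pri)
    m_ext = v_ext * (m_post / v_post - m_pri / v_pri)
    return m_ext, v_ext

# Sanity check: recombining the extrinsic message with the prior via the
# Gaussian product rule recovers the posterior exactly.
m_pri, v_pri = 0.5, 2.0
m_post, v_post = 0.8, 0.6
m_ext, v_ext = extrinsic(m_post, v_post, m_pri, v_pri)
v_comb = 1.0 / (1.0 / v_ext + 1.0 / v_pri)
m_comb = v_comb * (m_ext / v_ext + m_pri / v_pri)
```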
For more details of the turbo principle and extrinsic information, please refer to [28, 30]. Then, in module A, can be viewed as the pseudo-observation of an equivalent additive Gaussian channel with as input, i.e.,
where . In matrix form, (8) can be rewritten as an SLM form
where . The underlying theme in (8) is to approximate the estimation error as zero-mean Gaussian noise whose variance is the extrinsic variance . Such an idea was first used (to our knowledge) in (formula (47) therein). Rigorous theoretical analysis of this approximation is left for future work.
As a result, the original GLM problem reduces to an SLM problem so that one can estimate by performing SLM inference over (9) to obtain the posterior estimates. Specially, if GLM itself reduces to SLM, then from (6) and (7), we obtain , , which is exactly the original SLM problem. Given the posterior mean and variance estimates of , the posterior mean and variance of can be obtained using the constraint . Note that the specific method to calculate and may vary for different SLM inference methods, as shown in Section IV. Given and , the extrinsic mean and variance of can be computed as 
This process continues until convergence or some predefined stopping criterion is satisfied. The corresponding algorithm framework is shown in Algorithm 1. Note that in step 3), one or more iterations can be performed for iterative SLM methods. Under such a framework, a variety of inference methods for SLM can be easily extended to GLM. As specific instances, we will show in Section IV how AMP, VAMP, and SBL can be extended to GLM within this framework.
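The alternation between the two modules can be sketched as the following loop; `mmse_B` and `slm_solver` are hypothetical placeholders for module B's component-wise MMSE and module A's SLM inference, and the message names mirror the text.

```python
def glm_inference(y, mmse_B, slm_solver, m_init, v_init, n_iter=20):
    """Skeleton of the unified framework (Algorithm 1). Assumed interfaces:
    mmse_B(y, m, v) returns the component-wise posterior mean/variance of z;
    slm_solver(y_tilde, v_w) returns the posterior mean/variance of z from
    SLM inference over the pseudo-SLM."""
    m_A2B, v_A2B = m_init, v_init
    for _ in range(n_iter):
        # Module B: component-wise MMSE under the Gaussian prior from A
        m_post, v_post = mmse_B(y, m_A2B, v_A2B)
        # Extrinsic message B -> A, as in (6)-(7)
        v_B2A = 1.0 / (1.0 / v_post - 1.0 / v_A2B)
        m_B2A = v_B2A * (m_post / v_post - m_A2B / v_A2B)
        # Module A: SLM inference with pseudo-observation m_B2A, noise v_B2A
        m_z, v_z = slm_solver(m_B2A, v_B2A)
        # Extrinsic message A -> B, as in (10)-(11)
        v_A2B = 1.0 / (1.0 / v_z - 1.0 / v_B2A)
        m_A2B = v_A2B * (m_z / v_z - m_B2A / v_B2A)
    return m_A2B, v_A2B
```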
IV. Extending AMP, VAMP, and SBL to GLM
IV-A. From AMP to Gr-AMP and GAMP
We first review AMP for SLM. Many variants of AMP exist; in this letter we adopt the Bayesian version. Assuming the prior distribution , the factor graph for SLM (2) is shown in Fig. 3 (a), where
denotes the Gaussian distribution. Denote by the message from to in the -th iteration, with mean and variance of being and . Then the message from to is , with mean and variance of being and , respectively. Define
By the central limit theorem and neglecting high-order terms, the resultant AMP is shown in Algorithm 2. Note that the approximation in AMP is asymptotically exact in the large-system limit, i.e., with [23, 4]. The expectation operations in the posterior mean and variance in lines (D3) and (D4) are w.r.t. the posterior distribution . For more details, refer to [23, 4].
1) Initialization: ,, ;
2) Variable node update: For
3) Factor node update: For
4) Set and proceed to step 2) until .
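For intuition, a heavily simplified AMP iteration for the SLM is sketched below, with a soft-threshold denoiser standing in for the Bayesian posterior-mean denoiser of lines (D3)-(D4); the threshold scaling `theta` is an assumed tuning constant. The sketch keeps the two essential ingredients: the pseudo-data x + A^T r and the Onsager correction term.

```python
import numpy as np

def soft(u, t):
    """Component-wise soft threshold."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(A, y, n_iter=50, theta=1.4):
    """Minimal soft-threshold AMP sketch (not the Bayesian Algorithm 2)."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(r) / np.sqrt(m)     # effective noise level estimate
        x_new = soft(x + A.T @ r, theta * tau)   # denoise the pseudo-data
        # Onsager term: previous residual times the average denoiser derivative
        # scaled by n/m, which here is count_nonzero(x_new) / m
        r = y - A @ x_new + (r / m) * np.count_nonzero(x_new)
        x = x_new
    return x
```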
Then we show how to extend AMP to GLM under the unified framework in Section III. As shown in Algorithm 1, the key lies in calculating the extrinsic mean and extrinsic variance . For AMP, fortunately, these two values have already been calculated, as shown in Lemma 1.
After performing AMP in module A, the output extrinsic mean and variance to module B can be computed as
where , are the results of AMP in line (D6) and (D5) of Algorithm 2, respectively.
From (10) and (11), the posterior mean and variance of are needed to calculate and . However, in the original AMP, they are not explicitly given. To this end, we present an alternative factor graph for SLM in Fig. 3 (b), where denotes the Dirac delta function and denotes the Gaussian distribution . The two factor graph representations in Fig. 3 are equivalent, so the message from to is precisely . Thus, the message from to is . Recall the definitions of and in (13). As in AMP, according to the central limit theorem, can be approximated as a Gaussian random variable whose mean and variance are and , i.e., . Since the message from to is , the posterior distribution of can be calculated as , where
Note that the proposed Gr-AMP in Algorithm 3 can be viewed as a general form of the well-known GAMP. In particular, if , Gr-AMP becomes equivalent to the original GAMP. To prove this, in step 2) of Algorithm 3, we first obtain the equivalent noise variance and observation from (6) and (7) with and . Then, in step 3) of Algorithm 3, one iteration of AMP is performed by simply replacing and in Algorithm 2 with and . After some algebra, and using the definitions of the two intermediate variables and in (17) and (18), the -th iteration of the proposed Gr-AMP in Algorithm 3 with becomes
IV-B. From VAMP and SBL to Gr-VAMP and Gr-SBL
For VAMP and SBL, the values of and are not readily available as in AMP. However, the posterior mean and covariance matrix of , denoted and , respectively, can be obtained in the linear MMSE (LMMSE) step of VAMP and SBL. Thus, the posterior mean and variance of are calculated as and , where denotes the trace of matrix . Then, we obtain the extrinsic mean and variance from (10) and (11). The resulting generalized VAMP (Gr-VAMP) and generalized SBL (Gr-SBL) are shown in Algorithm 4.
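For concreteness, the LMMSE step and the mapping from the posterior of x to the (averaged) posterior mean and variance of z = A x can be sketched as follows, under an i.i.d. Gaussian prior on x with an assumed scalar variance; the function name and interface are ours.

```python
import numpy as np

def lmmse_posterior_z(A, y_pseudo, v_w, m_x, v_x):
    """LMMSE step: prior x ~ N(m_x, v_x I), pseudo-observation
    y_pseudo = A x + w with w ~ N(0, v_w I). Returns the posterior mean of
    z = A x and the averaged posterior variance tr(A Sigma A^T) / m."""
    m, n = A.shape
    Sigma = np.linalg.inv(A.T @ A / v_w + np.eye(n) / v_x)   # posterior cov of x
    mu_x = Sigma @ (A.T @ y_pseudo / v_w + m_x / v_x)        # posterior mean of x
    mu_z = A @ mu_x
    v_z = np.trace(A @ Sigma @ A.T) / m                      # averaged variance
    return mu_z, v_z
```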
IV-C. Applications to 1-bit Compressed Sensing
In this subsection, we evaluate the performance of Gr-AMP, Gr-VAMP, and Gr-SBL for 1-bit CS. Assume that the sparse signal follows an i.i.d. Bernoulli-Gaussian distribution . The 1-bit quantized measurements are obtained by , where denotes the component-wise sign function. We consider the case for sparsity ratio . The signal-to-noise ratio (SNR) is set to 50 dB. To test the performance for a general matrix , ill-conditioned matrices with various condition numbers are considered. Specifically,
is constructed from the singular value decomposition (SVD) , where the orthogonal matrices and were drawn uniformly w.r.t. the Haar measure. The singular values were chosen to achieve a desired condition number , where and are the maximum and minimum singular values of , respectively. The recovery performance is assessed using the debiased normalized mean square error (dNMSE) in decibels, i.e., . In all simulations, , and the results are averaged over 100 realizations. The results of GAMP and the algorithm denoted GVAMP are also given for comparison.
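A sketch of this construction and of the dNMSE metric; the geometric spacing of the singular values is our assumption, and the dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, kappa = 64, 128, 100.0       # illustrative dimensions, target condition number

def haar_orthogonal(d):
    """Orthogonal matrix drawn uniformly w.r.t. the Haar measure
    (QR of a Gaussian matrix, with the sign convention fixed)."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

U, V = haar_orthogonal(m), haar_orthogonal(n)
s = kappa ** (-np.arange(m) / (m - 1))     # geometric spacing: s[0]/s[-1] = kappa
S = np.zeros((m, n)); S[:m, :m] = np.diag(s)
A = U @ S @ V.T                            # SVD-based construction

def dnmse_db(x, x_hat):
    """Debiased NMSE in dB: rescale x_hat by the optimal scalar before the NMSE."""
    c = (x_hat @ x) / (x_hat @ x_hat)
    return 10.0 * np.log10(np.sum((x - c * x_hat) ** 2) / np.sum(x ** 2))
```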
Fig. 4 (a) and (b) show the dNMSE versus the number of iterations for condition numbers and , respectively. When , all algorithms converge to the same dNMSE value except Gr-SBL, which performs slightly worse. When , however, both Gr-AMP and GAMP fail, while the other algorithms still work well. In Fig. 4 (c), the dNMSE is evaluated for various condition numbers
ranging from 1 (i.e., row-orthogonal matrix) to
(i.e., highly ill-conditioned matrix). It is seen that as the condition number increases, the recovery performance degrades gracefully for Gr-VAMP, GVAMP, and Gr-SBL, while both Gr-AMP and GAMP diverge even for mild condition numbers.
V. Conclusion
In this letter, a unified Bayesian inference framework for GLM was presented whereby a variety of SLM algorithms can be easily extended to handle the GLM problem. Specific instances elucidated under this framework are the extensions of AMP, VAMP, and SBL, which unveil intimate connections between some established algorithms and lead to novel extensions of other SLM algorithms. Numerical results for 1-bit CS demonstrate the effectiveness of this framework.
The authors sincerely thank P. Schniter for valuable discussions and the anonymous reviewers for valuable comments.
A. Details of Gr-VAMP
The vector approximate message passing (VAMP) algorithm is a robust version of the well-known AMP for the standard linear model (SLM). Define the LMMSE estimator and component-wise denoiser function as
where the expectation in calculating the posterior mean is with respect to (w.r.t.) the distribution
Then, the VAMP algorithm is shown in Algorithm 5. Note that here we only show the MMSE version of VAMP. However, it also applies to the SVD version of VAMP. For more details, please refer to . It should also be noted that the denoising step and the LMMSE step in Algorithm 5 can be exchanged in practical implementations.
Note that line 12 of Algorithm 5 represents
where denotes the trace of matrix . Define
As shown in the unified framework in Algorithm 1, the key lies in calculating the extrinsic values of and . To this end, from (10) and (11), we should first calculate the posterior mean vector and variance vector of from the results of VAMP. As shown in Algorithm 5, the posterior mean and covariance matrix of can be obtained via the LMMSE step of VAMP. Since , the posterior mean and covariance matrix of are
As a result, variance vector of can be calculated as the diagonal vector of covariance matrix , i.e.,
Note that for AMP, the covariance matrix of is unavailable, so we cannot directly calculate and as in (29) and (31), respectively. In fact, only the diagonal elements of the covariance matrix of are available in AMP.
Moreover, in VAMP, the variance values are averaged before being further processed. To be consistent with the implementation of VAMP, the variance vector of is also set to the average of the diagonal elements of the covariance matrix , i.e.,
B. Details of Gr-SBL
The sparse Bayesian learning (SBL) algorithm is a well-known Bayesian inference method for compressed sensing over the SLM (2). In SBL, the prior distribution of the signal is assumed to be i.i.d. Gaussian, i.e.,
where are non-negative hyper-parameters controlling the sparsity of the signal
and they follow Gamma distributions
where is the Gamma function.
Then, using the expectation-maximization (EM) method, the conventional SBL algorithm is shown in Algorithm 6. Here we assume knowledge of the noise variance; SBL can also handle the case with unknown noise variance. For more details, please refer to .
It is seen that in SBL, lines 4 and 5 of Algorithm 6 can be recognized as the LMMSE estimate of under the likelihood and Gaussian prior given . Thus, the posterior mean and covariance matrix of can be calculated as and in Algorithm 6, respectively.
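The EM iteration of conventional SBL can be sketched as follows, with the noise variance assumed known; the function name and interface are ours.

```python
import numpy as np

def sbl_em(A, y, sigma2, n_iter=50):
    """EM-based SBL sketch with known noise variance sigma2.
    E-step: Gaussian posterior of x under the prior N(0, diag(1/alpha)).
    M-step: alpha_i = 1 / (mu_i^2 + Sigma_ii)."""
    m, n = A.shape
    alpha = np.ones(n)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(alpha))  # posterior cov
        mu = Sigma @ A.T @ y / sigma2                             # posterior mean
        alpha = 1.0 / (mu ** 2 + np.diag(Sigma))                  # EM hyper-update
    return mu, Sigma
```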
Under the unified framework in Algorithm 1, and similar to the derivation of Gr-VAMP, we first obtain the posterior mean and covariance matrix of as
Then, the variance vector of can be calculated as the average of the diagonal elements of the covariance matrix , i.e.,
Note that the average in (37) is not essential and we can also simply choose the diagonal elements of , i.e.,