Estimation for High-Dimensional Multi-Layer Generalized Linear Model – Part I: The Exact MMSE Estimator
This two-part work considers the minimum mean square error (MMSE) estimation problem for a high-dimensional multi-layer generalized linear model (ML-GLM), which resembles a feed-forward fully connected deep learning network in that each of its layers mixes a random input with a known weighting matrix and activates the result via non-linear functions, except that the activation here is stochastic, following some random distribution. Part I of this work focuses on the exact MMSE estimator, whose implementation has long been known to be infeasible. For this exact estimator, an asymptotic performance analysis is carried out using a replica method refined in certain aspects. A decoupling principle is then established, suggesting that, in terms of the joint input-and-estimate distribution, the original multiple-input multiple-output estimation problem is identical to a simple single-input single-output one subject only to additive white Gaussian noise (AWGN). The variance of the AWGN is further shown to be determined by a set of coupled equations, whose dependence on the weighting and activation is given explicitly and analytically. Compared to existing results, this paper is the first to offer a decoupling principle for the ML-GLM estimation problem. To further address the implementation issue of the exact solution, Part II proposes an approximate estimator, ML-GAMP, whose per-iteration complexity is as low as that of GAMP, while its asymptotic MSE (if converged) matches that of the exact MMSE estimator.
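The generative structure described above (layer-wise linear mixing with known weighting matrices, followed by stochastic activation) can be sketched as follows. This is a minimal illustrative model, not the paper's formulation: the choice of a Gaussian input prior, a tanh non-linearity, and additive Gaussian activation noise are all assumptions made here for concreteness, and the function name `ml_glm_sample` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ml_glm_sample(dims, noise_std=0.1):
    """Draw one sample from an illustrative L-layer ML-GLM.

    Each layer mixes its input with a known weighting matrix W_l and
    applies a stochastic activation; here the activation is modeled
    (as an assumption) as tanh followed by additive Gaussian noise.
    """
    x = rng.standard_normal(dims[0])  # random input (assumed i.i.d. Gaussian prior)
    weights = []
    for n_in, n_out in zip(dims[:-1], dims[1:]):
        # Known weighting matrix, scaled for the high-dimensional regime
        W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
        z = W @ x                                   # linear mixing
        x = np.tanh(z) + noise_std * rng.standard_normal(n_out)  # stochastic activation
        weights.append(W)
    return x, weights

y, Ws = ml_glm_sample([100, 80, 60])
print(y.shape)   # shape of the observed output
print(len(Ws))   # number of layers
```

The MMSE estimator studied in Part I would recover the input from the final-layer output `y` given the weighting matrices `Ws` and the activation distributions; the sketch only shows the forward (generative) direction.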