Alternating minimization for generalized rank one matrix sensing: Sharp predictions from a random initialization

We consider the problem of estimating the factors of a rank-1 matrix with i.i.d. Gaussian, rank-1 measurements that are nonlinearly transformed and corrupted by noise. For two prototypical choices of the nonlinearity, we study the convergence properties of a natural alternating update rule for this nonconvex optimization problem starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic recursion that is accurate even in high-dimensional problems. Notably, while the infinite-sample population update is uninformative and suggests exact recovery in a single step, the algorithm, and our deterministic prediction, converges geometrically fast from a random initialization. Our sharp, non-asymptotic analysis also exposes several other fine-grained properties of this problem, including how the nonlinearity and noise level affect convergence behavior. On a technical level, our results are enabled by showing that the empirical error recursion can be predicted by our deterministic sequence within fluctuations of order n^{-1/2} when each iteration is run with n observations. Our technique leverages leave-one-out tools originating in the literature on high-dimensional M-estimation and provides an avenue for sharply analyzing higher-order iterative algorithms from a random initialization in other high-dimensional optimization problems with random data.
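As a concrete illustration of the alternating update rule described above, the following is a minimal NumPy sketch of sample-split alternating least squares for the rank-one model y_i = f((x_i^T u*)(z_i^T v*)) + eps_i. For simplicity the nonlinearity f is taken to be the identity; the dimensions, batch size n, noise level, and all variable names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, n, T = 50, 40, 2000, 10           # dimensions, per-iteration batch size, iterations
sigma = 0.1                               # noise level (assumed)

u_star = rng.standard_normal(d1); u_star /= np.linalg.norm(u_star)
v_star = rng.standard_normal(d2); v_star /= np.linalg.norm(v_star)

def fresh_batch():
    # i.i.d. Gaussian rank-one measurements x_i z_i^T with noisy responses
    # y_i = f((x_i . u*)(z_i . v*)) + eps_i; f is taken to be the identity here
    X = rng.standard_normal((n, d1))
    Z = rng.standard_normal((n, d2))
    y = (X @ u_star) * (Z @ v_star) + sigma * rng.standard_normal(n)
    return X, Z, y

# random initialization on the unit sphere
u = rng.standard_normal(d1); u /= np.linalg.norm(u)
v = rng.standard_normal(d2); v /= np.linalg.norm(v)

for t in range(T):
    # sample splitting: each half-step consumes a fresh batch of n observations
    X, Z, y = fresh_batch()
    b = (Z @ v)[:, None]                  # fixing v, y_i is linear in u: y_i ~ (z_i . v) x_i^T u
    u = np.linalg.lstsq(b * X, y, rcond=None)[0]
    X, Z, y = fresh_batch()
    a = (X @ u)[:, None]                  # fixing u, y_i is linear in v
    v = np.linalg.lstsq(a * Z, y, rcond=None)[0]
    # track error up to the inherent sign/scale ambiguity of the factors
    uh = u / np.linalg.norm(u)
    err = min(np.linalg.norm(uh - u_star), np.linalg.norm(uh + u_star))
    print(f"iter {t + 1}: error in u = {err:.3e}")
```

With the identity nonlinearity each half-step reduces to an ordinary least-squares problem with a closed-form solution; under the nonlinear choices the paper studies, the inner minimization changes, but the structure of one fresh batch of n observations per update is the sample-splitting device the abstract refers to.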

Related research

- Sharp global convergence guarantees for iterative nonconvex optimization: A Gaussian process perspective (09/20/2021). We consider a general class of regression models with normally distribut...
- Fast Global Convergence for Low-rank Matrix Recovery via Riemannian Gradient Descent with Random Initialization (12/31/2020). In this paper, we propose a new global analysis framework for a class of...
- Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization (02/17/2022). In this work, we study the performance of sub-gradient method (SubGM) on...
- A Non-convex One-Pass Framework for Generalized Factorization Machine and Rank-One Matrix Sensing (08/21/2016). We develop an efficient alternating framework for learning a generalized...
- Recovery guarantee of weighted low-rank approximation via alternating minimization (02/06/2016). Many applications require recovering a ground truth low-rank matrix from...
- Randomly Initialized Alternating Least Squares: Fast Convergence for Matrix Sensing (04/25/2022). We consider the problem of reconstructing rank-one matrices from random ...
- A Deterministic Convergence Framework for Exact Non-Convex Phase Retrieval (01/09/2020). In this work, we analyze the non-convex framework of Wirtinger Flow (WF)...
