Sparse LMS via Online Linearized Bregman Iteration

10/01/2012 · by Tao Hu, et al.

We propose a version of the least-mean-square (LMS) algorithm for sparse system identification. Our algorithm, called online linearized Bregman iteration (OLBI), is derived by minimizing the cumulative squared prediction error together with an l1-l2 norm regularizer. By treating the non-differentiable regularizer systematically, we arrive at a simple two-step iteration. We demonstrate that OLBI is bias-free and compare its operation with existing sparse LMS algorithms by rederiving them in the online convex optimization framework. We perform a convergence analysis of OLBI for white input signals and derive theoretical expressions for both the steady-state and instantaneous mean square deviation (MSD). We demonstrate numerically that OLBI improves the performance of LMS-type algorithms for signals generated from sparse tap weights.
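
To make the two-step structure concrete, here is a minimal sketch of a linearized-Bregman-style sparse LMS update: a plain gradient step on an auxiliary (dual) variable followed by a soft-thresholding (shrink) step induced by the l1 part of the regularizer. The function names, step size mu, threshold lam, and the exact scaling of the shrink step are illustrative assumptions and not the paper's precise parameterization.

```python
import numpy as np

def soft_threshold(v, lam):
    """Component-wise shrink operator arising from the l1 part of the regularizer."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def olbi_sketch(x, d, n_taps, mu=0.01, lam=1.0):
    """Hypothetical sketch of a two-step sparse-LMS update in the spirit of OLBI.

    x      : input signal (1-D array)
    d      : desired/observed output samples
    n_taps : length of the unknown sparse FIR system
    mu     : gradient step size (assumed value)
    lam    : l1 threshold of the l1-l2 regularizer (assumed value)
    """
    v = np.zeros(n_taps)   # auxiliary variable updated by unconstrained gradient steps
    w = np.zeros(n_taps)   # sparse tap-weight estimate
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # current input regressor
        e = d[n] - w @ u                    # instantaneous prediction error
        v += mu * e * u                     # step 1: gradient step on the auxiliary variable
        w = soft_threshold(v, lam)          # step 2: shrink to obtain sparse weights
    return w
```

Because the thresholding acts on the accumulated variable v rather than directly on w, small taps are driven exactly to zero while large taps are eventually recovered without shrinkage bias, which is the intuition behind the bias-free claim in the abstract.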
