The NLMS algorithm with time-variant optimum stepsize derived from a Bayesian network perspective

11/18/2014
by Christian Huemmer, et al.

In this article, we derive a new stepsize adaptation for the normalized least mean square (NLMS) algorithm by describing the task of linear acoustic echo cancellation from a Bayesian network perspective. Similar to the well-known Kalman filter equations, we model the acoustic wave propagation from the loudspeaker to the microphone by a latent state vector and define a linear observation equation (to model the relation between the state vector and the observation) as well as a linear process equation (to model the temporal evolution of the state vector). Based on additional assumptions about the statistics of the random variables in the observation and process equations, we apply the expectation-maximization (EM) algorithm to derive an NLMS-like filter adaptation. By exploiting the conditional independence rules for Bayesian networks, we reveal that the resulting EM-NLMS algorithm has a stepsize update equivalent to the optimal-stepsize calculation proposed by Yamamoto and Kitayama in 1982, which has been adopted in many textbooks. The main difference is that the instantaneous stepsize value is estimated in the M step of the EM algorithm (instead of being approximated by artificially extending the acoustic echo path). The EM-NLMS algorithm is experimentally verified for synthesized scenarios with both white noise and male speech as input signals.
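To make the role of the time-variant step size concrete, the following NumPy sketch implements an NLMS filter whose step size follows the classical optimal-stepsize rule the abstract refers to: the ratio of undisturbed-error power to total-error power, mu(n) = E[e_u(n)^2] / E[e(n)^2]. This is only an illustration under stated assumptions, not the paper's M-step estimator: the recursive power smoothing, the assumed known near-end noise variance sigma_n2, and all function and parameter names are hypothetical.

```python
import numpy as np

def nlms_variable_stepsize(x, d, L=64, sigma_n2=1e-3, alpha=0.99, eps=1e-10):
    """NLMS with a time-variant step size (illustrative sketch).

    The step size is the classical optimal-stepsize ratio
    mu = E[e_u^2] / E[e^2], with the undisturbed-error power e_u^2
    approximated as (smoothed total-error power) - sigma_n2, where
    sigma_n2 is an assumed known near-end noise variance.
    """
    w = np.zeros(L)                      # current echo-path estimate
    e_pow = sigma_n2                     # smoothed total-error power
    e_out = np.zeros(len(d))
    for n in range(L - 1, len(d)):
        x_n = x[n - L + 1:n + 1][::-1]   # most recent L input samples
        e = d[n] - w @ x_n               # a priori error signal
        e_pow = alpha * e_pow + (1 - alpha) * e * e
        # Step size: estimated undisturbed-error power over
        # total-error power, clipped to the stable range [0, 1].
        mu = np.clip((e_pow - sigma_n2) / (e_pow + eps), 0.0, 1.0)
        w += mu * e * x_n / (x_n @ x_n + eps)   # NLMS coefficient update
        e_out[n] = e
    return w, e_out

# Toy usage: identify a decaying random 64-tap echo path from
# white-noise input, mirroring the paper's white-noise scenario.
rng = np.random.default_rng(0)
h = rng.standard_normal(64) * np.exp(-0.05 * np.arange(64))
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + np.sqrt(1e-3) * rng.standard_normal(len(x))
w_hat, e = nlms_variable_stepsize(x, d, L=64, sigma_n2=1e-3)
```

In this sketch the noise variance is supplied as a fixed parameter; the point of the EM-NLMS algorithm described above is that the corresponding statistics are instead estimated in the M step, rather than approximated by artificially extending the acoustic echo path.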


research
05/01/2021

Incorporating Transformer and LSTM to Kalman Filter with EM algorithm for state estimation

Kalman Filter requires the true parameters of the model and solves optim...
research
12/16/2020

A Synergistic Kalman- and Deep Postfiltering Approach to Acoustic Echo Cancellation

We introduce a synergistic approach to double-talk robust acoustic echo ...
research
04/15/2016

Bayesian linear regression with Student-t assumptions

As an automatic method of determining model complexity using the trainin...
research
09/05/2012

A Max-Product EM Algorithm for Reconstructing Markov-tree Sparse Signals from Compressive Samples

We propose a Bayesian expectation-maximization (EM) algorithm for recons...
research
02/06/2012

Cramér-Rao-Type Bounds for Sparse Bayesian Learning

In this paper, we derive Hybrid, Bayesian and Marginalized Cramér-Rao lo...
research
01/19/2016

Adaptive Image Denoising by Mixture Adaptation

We propose an adaptive learning procedure to learn patch-based image pri...
research
05/24/2023

State estimation for one-dimensional agro-hydrological processes with model mismatch

The importance of accurate soil moisture data for the development of mod...
