A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm

06/23/2020
by   Orlando Romero, et al.

Despite recent advances in systems and control (S&C)-based analysis of optimization algorithms, relatively little work has been dedicated specifically to machine learning (ML) algorithms and their applications. This paper addresses that gap by illustrating how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based. The particular ML problem on which this paper focuses is parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM). Starting from first principles of dynamical systems stability theory, we develop conditions for the convergence of MAP-EM. Furthermore, we show that when additional assumptions are met, fast (linear or quadratic) convergence is achieved, a result that could have been difficult to unveil without the adopted S&C approach. The convergence guarantees in this paper effectively expand the set of sufficient conditions for EM applications, demonstrating the potential of similar S&C-based convergence analyses of other ML algorithms.
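The Lyapunov-style view of MAP-EM can be illustrated on a toy problem. The sketch below (not the paper's construction, just a hedged illustration) runs MAP-EM on a two-component, unit-variance Gaussian mixture with a hypothetical Gaussian prior N(0, tau^2) on the component means, and checks that the log-posterior is non-decreasing along the iterates, i.e., its negation behaves like a discrete-time Lyapunov function for the EM map.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1D data: two unit-variance Gaussian components, equal weights.
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

# Hypothetical prior on each component mean: mu_k ~ N(0, tau2).
tau2 = 10.0

def log_posterior(mu):
    # log p(x | mu) + log p(mu), up to an additive constant.
    comp = np.stack([-0.5 * (x - m) ** 2 for m in mu])  # log N(x | m, 1) + const
    ll = np.sum(np.logaddexp(comp[0], comp[1]) - np.log(2.0))
    lp = -0.5 * np.sum(np.asarray(mu) ** 2) / tau2
    return ll + lp

mu = np.array([-1.0, 1.0])  # initial estimate
vals = [log_posterior(mu)]
for _ in range(50):
    # E-step: posterior responsibilities of each component for each point.
    comp = np.stack([-0.5 * (x - m) ** 2 for m in mu])
    r = np.exp(comp - np.logaddexp(comp[0], comp[1]))
    # M-step (MAP): the prior adds shrinkage 1/tau2 to the effective counts.
    mu = (r @ x) / (r.sum(axis=1) + 1.0 / tau2)
    vals.append(log_posterior(mu))

# MAP-EM's ascent property: the log-posterior never decreases, so its
# negation serves as a candidate Lyapunov function for the iteration.
assert all(b >= a - 1e-9 for a, b in zip(vals, vals[1:]))
```

The monotone-ascent check is the standard EM guarantee (extended to MAP-EM by including the log-prior); the paper's contribution is to go beyond it, using Lyapunov arguments to certify convergence and, under additional assumptions, linear or quadratic rates.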

research
10/04/2018

Convergence of the Expectation-Maximization Algorithm Through Discrete-Time Lyapunov Stability Theory

In this paper, we propose a dynamical systems perspective of the Expecta...
research
03/03/2019

Analysis of Gradient-Based Expectation-Maximization-Like Algorithms via Integral Quadratic Constraints

The Expectation-Maximization (EM) algorithm is one of the most popular m...
research
10/19/2012

On the Convergence of Bound Optimization Algorithms

Many practitioners who use the EM algorithm complain that it is sometime...
research
02/24/2023

Asymptotic convergence of iterative optimization algorithms

This paper introduces a general framework for iterative optimization alg...
research
12/07/2022

Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast Evasion of Non-Degenerate Saddle Points

Gradient-based first-order convex optimization algorithms find widesprea...
research
11/28/2012

Nature-Inspired Metaheuristic Algorithms: Success and New Challenges

Despite the increasing popularity of metaheuristics, many crucially impo...
research
07/26/2022

Analysis and Design of Quadratic Neural Networks for Regression, Classification, and Lyapunov Control of Dynamical Systems

This paper addresses the analysis and design of quadratic neural network...
