On the Convergence of Bound Optimization Algorithms

10/19/2012
by Ruslan R. Salakhutdinov, et al.

Many practitioners who use the EM algorithm complain that it is sometimes slow. When does this happen, and what can be done about it? In this paper, we study the general class of bound optimization algorithms, including Expectation-Maximization, Iterative Scaling, and CCCP, and their relationship to direct optimization algorithms such as gradient-based methods for parameter learning. We derive a general relationship between the updates performed by bound optimization methods and those of gradient and second-order methods, and we identify analytic conditions under which bound optimization algorithms exhibit quasi-Newton behavior, as well as conditions under which they possess poor, first-order convergence. Based on this analysis, we consider several specific algorithms, interpret and analyze their convergence properties, and provide recipes for preprocessing input to these algorithms to yield faster convergence. We report empirical results supporting our analysis and showing that simple data preprocessing can dramatically improve the performance of bound optimizers in practice.
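The kind of bound optimizer the abstract discusses can be illustrated with a minimal EM loop. The sketch below (my own illustration, not code from the paper) fits a two-component 1-D Gaussian mixture with unit variances and equal mixing weights, estimating only the means: each E-step builds the usual EM lower bound on the log-likelihood, and each M-step maximizes it in closed form, so the likelihood never decreases. How fast the iterates converge depends on how much the components overlap, which is the regime distinction the paper analyzes.

```python
import numpy as np

def em_two_gaussians(x, mu_init, n_iters=200):
    """EM for a 2-component 1-D Gaussian mixture with unit variances
    and equal mixing weights; only the two means are estimated.

    Each iteration is a bound-optimization step: the E-step computes
    posterior responsibilities (defining the lower bound), and the
    M-step maximizes that bound via responsibility-weighted means.
    """
    mu = np.array(mu_init, dtype=float)
    for _ in range(n_iters):
        # E-step: posterior responsibility of component 0 for each point
        # (unit variances and equal weights, so only squared distances matter)
        log_p0 = -0.5 * (x - mu[0]) ** 2
        log_p1 = -0.5 * (x - mu[1]) ** 2
        r0 = 1.0 / (1.0 + np.exp(log_p1 - log_p0))
        # M-step: each mean becomes a responsibility-weighted average
        mu[0] = np.sum(r0 * x) / np.sum(r0)
        mu[1] = np.sum((1.0 - r0) * x) / np.sum(1.0 - r0)
    return mu
```

With well-separated components (low overlap, little "missing information") the updates behave almost like Newton steps and the means snap into place in a handful of iterations; as the components overlap more, the same loop degrades toward slow, first-order convergence.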


