
Fast Incremental Expectation Maximization for finite-sum optimization: non-asymptotic convergence
Fast Incremental Expectation Maximization (FIEM) is a version of the EM framework for large datasets. In this paper, we first recast FIEM and other incremental EM-type algorithms in the Stochastic Approximation within EM framework. Then, we provide non-asymptotic bounds for the convergence in expectation as a function of the number of examples n and of the maximal number of iterations. We propose two strategies for achieving an ϵ-approximate stationary point, requiring respectively O(n^{2/3}/ϵ) and O(√n/ϵ^{3/2}) iterations; both strategies rely on a random termination rule and on a constant step size in the Stochastic Approximation step. Our bounds improve on the literature in two ways. First, they allow the number of iterations to scale as √n, which is better than the n^{2/3} rate, the best obtained so far; this comes at the cost of a stronger dependence on the tolerance ϵ, so this bound is most relevant for small-to-medium accuracy relative to the number of examples n. Second, for the n^{2/3} rate, numerical illustrations show that, thanks to an optimized choice of the step size and to bounds expressed in terms of quantities characterizing the optimization problem at hand, our results yield a less conservative choice of the step size and a better control of the convergence in expectation.
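The Stochastic Approximation within EM viewpoint invoked in the abstract can be illustrated on a toy model. The sketch below is a minimal, hedged example, not the paper's algorithm: it implements the plain online-EM special case (no variance reduction) for a two-component Gaussian mixture with unit variances and equal weights, and all function names and model choices are illustrative. At each iteration it draws one example, computes its per-sample sufficient statistics at the current parameter, moves the running statistics toward them with a constant step size γ (as in the paper's constant step-size regime), and applies the closed-form M-step.

```python
import math
import random

def per_sample_stats(x, mu):
    """E-step for one example: responsibilities of a two-component
    Gaussian mixture (unit variances, equal weights), returned as the
    per-sample sufficient statistics (r_k, r_k * x) per component."""
    w = [math.exp(-0.5 * (x - m) ** 2) for m in mu]
    z = sum(w)
    return [(wk / z, wk / z * x) for wk in w]

def m_step(s):
    """Closed-form M-step: component means from the running statistics."""
    return [s2 / s1 for (s1, s2) in s]

def sa_em(data, mu0, gamma=0.05, n_iter=2000, seed=0):
    """Stochastic Approximation within EM (online-EM flavor): constant
    step size gamma, one randomly drawn example per iteration."""
    rng = random.Random(seed)
    mu = list(mu0)
    n = len(data)
    # initialize the running statistics with one full E-step at mu0
    s = [(0.0, 0.0)] * len(mu)
    for x in data:
        s = [(s1 + t1 / n, s2 + t2 / n)
             for (s1, s2), (t1, t2) in zip(s, per_sample_stats(x, mu))]
    for _ in range(n_iter):
        x = data[rng.randrange(n)]       # draw one example at random
        # move the statistics toward the fresh per-sample statistics
        s = [(s1 + gamma * (t1 - s1), s2 + gamma * (t2 - s2))
             for (s1, s2), (t1, t2) in zip(s, per_sample_stats(x, mu))]
        mu = m_step(s)                   # closed-form parameter update
    return mu
```

Note that the paper's complexity guarantees additionally use a random termination rule (the output iterate is picked at a random index); for simplicity this sketch just returns the last iterate. The FIEM update itself differs by combining the fresh statistics with stored per-example statistics to reduce the variance of the increment.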