
Newton-based maximum likelihood estimation in nonlinear state space models
Maximum likelihood (ML) estimation using Newton's method in nonlinear st...
02/12/2015 ∙ by Manon Kok, et al.

OverSketched Newton: Fast Convex Optimization for Serverless Systems
Motivated by recent developments in serverless systems for large-scale m...
03/21/2019 ∙ by Vipul Gupta, et al.

A Fast Algorithm for Maximum Likelihood Estimation of Mixture Proportions Using Sequential Quadratic Programming
Maximum likelihood estimation of mixture proportions has a long history ...
06/04/2018 ∙ by Youngseok Kim, et al.

Faster independent component analysis by preconditioning with Hessian approximations
Independent Component Analysis (ICA) is a technique for unsupervised exp...
06/25/2017 ∙ by Pierre Ablin, et al.

Trust-Region Algorithms for Training Responses: Machine Learning Methods Using Indefinite Hessian Approximations
Machine learning (ML) problems are often posed as highly nonlinear and n...
07/01/2018 ∙ by Jennifer B. Erway, et al.

Distance Majorization and Its Applications
The problem of minimizing a continuously differentiable convex function ...
11/16/2012 ∙ by Eric C. Chi, et al.

A Gauss-Newton Method for Markov Decision Processes
Approximate Newton methods are a standard optimization tool which aim to...
07/29/2015 ∙ by Thomas Furmston, et al.
Accelerating likelihood optimization for ICA on real signals
We study optimization methods for solving the maximum likelihood formulation of independent component analysis (ICA). We consider both the problem constrained to white signals and the unconstrained problem. The Hessian of the objective function is costly to compute, which renders Newton's method impractical for large data sets. Many algorithms proposed in the literature can be rewritten as quasi-Newton methods, for which the Hessian approximation is cheap to compute. These algorithms are very fast on simulated data, where the linear mixture assumption really holds. However, on real signals, we observe that their rate of convergence can be severely impaired. In this paper, we investigate the origins of this behavior, and show that the recently proposed Preconditioned ICA for Real Data (Picard) algorithm overcomes this issue on both constrained and unconstrained problems.
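To illustrate the quasi-Newton idea the abstract describes — replacing the costly exact Hessian of the ICA likelihood with a cheap approximation — here is a minimal NumPy sketch of one relative-gradient step. The tanh score function, the step size, and the separable Hessian approximation E[psi'(y_i)] E[y_j^2] are common illustrative choices, not the paper's exact Picard algorithm.

```python
import numpy as np

def relative_gradient_step(W, X, step=0.2):
    """One quasi-Newton step for maximum-likelihood ICA.

    Uses the relative gradient G = E[psi(Y) Y^T] - I, with psi = tanh
    (a standard score for super-Gaussian sources), and the cheap
    curvature approximation H_ij ~ E[psi'(y_i)] E[y_j^2] in place of
    the exact Hessian, which is too costly on large data sets.
    """
    n, T = X.shape
    Y = W @ X                               # current source estimates
    psi = np.tanh(Y)                        # score function applied to Y
    G = psi @ Y.T / T - np.eye(n)           # relative (natural) gradient
    h = (1.0 - psi**2).mean(axis=1)         # E[psi'(y_i)], since tanh' = 1 - tanh^2
    sigma2 = (Y**2).mean(axis=1)            # E[y_j^2]
    H = np.maximum(h[:, None] * sigma2[None, :], 1e-8)  # guard tiny curvature
    direction = G / H                       # elementwise Hessian-scaled direction
    return W - step * direction @ W         # multiplicative (relative) update
```

Each iteration costs only a few matrix products with the data, versus the O(n^4) entries of the exact ICA Hessian; this is the cost gap that makes quasi-Newton schemes attractive here.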