
One-Step Estimation With Scaled Proximal Methods
We study statistical estimators computed using iterative optimization methods that are not run until completion. Classical results on maximum likelihood estimators (MLEs) assert that a one-step estimator (OSE), in which a single Newton-Raphson iteration is performed from a starting point with certain properties, is asymptotically equivalent to the MLE. We further develop these early-stopping results by deriving properties of one-step estimators defined by a single iteration of scaled proximal methods. Our main results show the asymptotic equivalence of the likelihood-based estimator and various one-step estimators defined by scaled proximal methods. By interpreting OSEs as the last of a sequence of iterates, our results provide insight into scaling numerical tolerance with sample size. Our setting contains scaled proximal gradient descent applied to certain composite models as a special case, making our results applicable to many problems of practical interest. Additionally, we provide support for the utility of the scaled Moreau envelope as a statistical smoother by interpreting scaled proximal descent as a quasi-Newton method applied to the scaled Moreau envelope.
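To make the classical one-step construction concrete, here is a minimal sketch of an OSE for the Cauchy location model, where the MLE has no closed form. The choice of model, the median as the root-n-consistent starting point, and all variable names are illustrative assumptions, not details from the paper, whose setting is far more general.

```python
import numpy as np

def one_step_estimator(x, theta0):
    """One Newton-Raphson step on the Cauchy location log-likelihood,
    starting from a root-n-consistent initial estimate theta0.
    (Illustrative sketch only; not the paper's construction.)"""
    r = x - theta0
    # score: first derivative of the log-likelihood in theta
    score = np.sum(2.0 * r / (1.0 + r**2))
    # observed information: negative second derivative
    info = np.sum(2.0 * (1.0 - r**2) / (1.0 + r**2) ** 2)
    # Newton-Raphson update toward the likelihood maximizer
    return theta0 + score / info

rng = np.random.default_rng(0)
x = rng.standard_cauchy(10_000) + 3.0  # true location parameter 3.0
theta0 = np.median(x)                  # root-n-consistent starting point
theta1 = one_step_estimator(x, theta0) # should land close to 3.0
```

Classical theory says `theta1` is asymptotically equivalent to the full MLE even though only one iteration was performed, which is the early-stopping phenomenon the abstract extends to scaled proximal iterations.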
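For the composite (e.g. L1-regularized) case, a single scaled proximal gradient step can be sketched as follows. We use a diagonal scaling matrix so the scaled prox reduces to coordinate-wise soft-thresholding; the toy objective, the diagonal stand-in for Hessian-based scaling, and all names are our assumptions for illustration.

```python
import numpy as np

def scaled_prox_l1(v, lam, h):
    # prox of lam*||.||_1 in the metric induced by diag(h):
    # coordinate-wise soft-thresholding with thresholds lam / h
    return np.sign(v) * np.maximum(np.abs(v) - lam / h, 0.0)

def one_step_scaled_prox(theta0, grad, h, lam):
    # one scaled proximal gradient step: gradient step in the
    # diag(h) metric, followed by the scaled prox of the penalty
    return scaled_prox_l1(theta0 - grad(theta0) / h, lam, h)

# toy L1-regularized least squares: f(theta) = 0.5 * ||theta - b||^2
b = np.array([3.0, 0.1])
grad = lambda theta: theta - b
theta0 = np.zeros(2)
h = np.ones(2)  # diagonal scaling (a stand-in for curvature information)
theta1 = one_step_scaled_prox(theta0, grad, h, lam=0.5)
# theta1 == [2.5, 0.0]: the large coordinate is shrunk by 0.5,
# the small one is thresholded to exactly zero
```

With a non-trivial (e.g. Hessian-based) scaling, this single iteration is the scaled-proximal analogue of the Newton-Raphson OSE, which is the object whose asymptotic equivalence to the likelihood-based estimator the paper establishes.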