Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning

02/18/2014
by   Julien Mairal, et al.

Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function. These upper bounds are tight at the current estimate, and each iteration monotonically drives the objective function downhill. Such a simple principle is widely applicable and has been very popular in various scientific fields, especially in signal processing and statistics. In this paper, we propose an incremental majorization-minimization scheme for minimizing a large sum of continuous functions, a problem of utmost importance in machine learning. We present convergence guarantees for non-convex and convex optimization when the upper bounds approximate the objective up to a smooth error; we call such upper bounds "first-order surrogate functions". More precisely, we study asymptotic stationary point guarantees for non-convex problems, and for convex ones, we provide convergence rates for the expected objective function value. We apply our scheme to composite optimization and obtain a new incremental proximal gradient algorithm with linear convergence rate for strongly convex functions. In our experiments, we show that our method is competitive with the state of the art for solving machine learning problems such as logistic regression when the number of training samples is large enough, and we demonstrate its usefulness for sparse estimation with non-convex penalties.
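To make the principle concrete, below is a minimal sketch in the spirit of the paper's incremental scheme, not the authors' reference implementation. It assumes each f_i has an L-Lipschitz gradient, so the quadratic g_i(theta) = f_i(z_i) + <grad f_i(z_i), theta - z_i> + (L/2)||theta - z_i||^2 majorizes f_i and is tight at the anchor z_i (one example of a first-order surrogate). The scheme keeps one surrogate per sample, re-anchors a single randomly chosen surrogate at each step, and minimizes the average surrogate in closed form: the mean of the vectors v_i = z_i - grad f_i(z_i) / L. The names incremental_mm and grad_fi, the toy data, and the constants are illustrative assumptions.

```python
import numpy as np

def incremental_mm(grad_fi, n, dim, L, n_iters=4000, seed=0):
    """Minimize (1/n) * sum_i f_i(theta) by incremental surrogate minimization.

    Each f_i is majorized at its own anchor z_i by a Lipschitz quadratic
    surrogate; the average surrogate is minimized in closed form by the
    mean of v_i = z_i - grad f_i(z_i) / L, so one table of n vectors
    suffices and each iteration touches a single sample.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    # Anchor all n surrogates at the initial point.
    v = np.array([theta - grad_fi(i, theta) / L for i in range(n)])
    theta = v.mean(axis=0)
    for _ in range(n_iters):
        i = rng.integers(n)
        v_new = theta - grad_fi(i, theta) / L  # re-anchor surrogate i at theta
        theta = theta + (v_new - v[i]) / n     # incremental update of the mean
        v[i] = v_new
    return theta

# Toy usage: l2-regularized logistic regression on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = np.sign(X @ rng.standard_normal(10))
lam, n, dim = 0.1, 200, 10

def grad_fi(i, theta):
    # Gradient of f_i(theta) = log(1 + exp(-y_i * x_i'theta)) + (lam/2)||theta||^2.
    return -y[i] * X[i] / (1.0 + np.exp(y[i] * (X[i] @ theta))) + lam * theta

L = 0.25 * (X ** 2).sum(axis=1).max() + lam  # per-sample smoothness bound
theta = incremental_mm(grad_fi, n, dim, L)
```

Because the averaged surrogate is an upper bound on the objective that is tight at the anchors, each re-anchoring step can only decrease the surrogate value, which is what yields the monotone, per-sample flavor of the guarantees described in the abstract.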


