Convergence and Complexity of Stochastic Block Majorization-Minimization

01/05/2022

by Hanbaek Lyu, et al.

Stochastic majorization-minimization (SMM) is an online extension of the classical majorization-minimization principle: it samples i.i.d. data points from a fixed data distribution and minimizes a recursively defined majorizing surrogate of an objective function. In this paper, we introduce stochastic block majorization-minimization, where the surrogates need only be block multi-convex and a single block is optimized at a time within a diminishing radius. By relaxing the strong convexity typically required of surrogates in SMM, our framework gains wider applicability, including online CANDECOMP/PARAFAC (CP) dictionary learning, and yields greater computational efficiency, especially when the problem dimension is large. We provide an extensive convergence analysis of the proposed algorithm under possibly dependent data streams, relaxing the standard i.i.d. assumption on data samples. We show that the proposed algorithm converges almost surely to the set of stationary points of a nonconvex objective under constraints at a rate O((log n)^{1+ε}/n^{1/2}) for the empirical loss function and O((log n)^{1+ε}/n^{1/4}) for the expected loss function, where n denotes the number of data samples processed. Under an additional assumption, the latter rate can be improved to O((log n)^{1+ε}/n^{1/2}). Our results provide the first convergence rate bounds for various online matrix and tensor decomposition algorithms under a general Markovian data setting.
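To make the idea concrete, the following is a minimal sketch (not the paper's algorithm) of stochastic block majorization-minimization on a toy block-multi-convex objective E[(y - uv)^2] with two scalar blocks u and v. The averaged surrogate is tracked through a running data moment, and at each step one block's surrogate minimizer is computed and the block is moved toward it within a diminishing trust-region radius. All variable names, the cyclic block schedule, and the constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
u_true, v_true = 2.0, 1.5          # ground truth; target product is 3.0

y_bar = 0.0                        # running mean of the stream: sufficient
                                   # statistic of the averaged quadratic surrogate
u, v = 0.5, 0.5                    # initial block values

for n in range(1, 2001):
    y = u_true * v_true + 0.1 * rng.standard_normal()  # one streamed sample
    w = 1.0 / n                    # surrogate averaging weight
    y_bar = (1 - w) * y_bar + w * y                    # recursive surrogate update
    radius = 1.0 / np.sqrt(n)      # diminishing radius for the block update
    if n % 2 == 0:                 # cyclic schedule: even steps update u
        # minimizer of the averaged surrogate in u with v held fixed
        u_star = y_bar * v / (v * v + 1e-12)
        u = u + np.clip(u_star - u, -radius, radius)
    else:                          # odd steps update v
        v_star = y_bar * u / (u * u + 1e-12)
        v = v + np.clip(v_star - v, -radius, radius)
```

Because the loss is convex in each block separately, the block-wise surrogate minimizer is available in closed form here; the diminishing radius plays the stabilizing role that the paper's analysis relies on in place of strongly convex surrogates.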


Related research

- Convergence of block coordinate descent with diminishing radius for nonconvex optimization (12/07/2020)
- Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization (06/19/2013)
- Online nonnegative tensor factorization and CP-dictionary learning for Markovian data (09/16/2020)
- Convergence of an Asynchronous Block-Coordinate Forward-Backward Algorithm for Convex Composite Optimization (01/14/2022)
- Stochastic Proximal AUC Maximization (06/14/2019)
- Decision-Dependent Risk Minimization in Geometrically Decaying Dynamic Environments (04/08/2022)
- On Accelerated Perceptrons and Beyond (10/17/2022)
