On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA

09/27/2018
by Dan Garber, et al.

Non-convex optimization with global convergence guarantees has gained significant interest in machine learning research in recent years. However, while most works consider either offline settings in which all data is given beforehand, or simple online stochastic i.i.d. settings, very little is known about non-convex optimization in adversarial online learning settings. In this paper we focus on the problem of Online Principal Component Analysis in the regret minimization framework. For this problem, all existing regret minimization algorithms are based on a positive semidefinite convex relaxation, and hence require quadratic memory and an SVD computation (either thin or full) on each iteration, which amounts to at least quadratic runtime per iteration. This is in stark contrast to a corresponding stochastic i.i.d. variant of the problem, which admits very efficient gradient ascent algorithms that work directly on the natural non-convex formulation of the problem, and hence require only linear memory and linear runtime per iteration. This raises the question: can non-convex online gradient ascent algorithms be shown to minimize regret in adversarial online settings? In this paper we take a step towards answering this question. We introduce an adversarially-perturbed spiked-covariance model in which each data point is assumed to follow a fixed stochastic distribution, but is then perturbed by adversarial noise. We show that in a certain regime of parameters, when the non-convex online gradient ascent algorithm is initialized with a "warm-start" vector, it provably minimizes the regret with high probability. We further discuss the possibility of computing such a "warm-start" vector. Our theoretical findings are supported by empirical experiments on both synthetic and real-world data.
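To make the contrast concrete, here is a minimal sketch of the kind of non-convex online gradient ascent update the abstract refers to: the learner maintains a single unit vector, takes a gradient step on the instantaneous reward (w·x)², and renormalizes, so each step costs only linear time and memory. The data generator below mimics the adversarially-perturbed spiked-covariance model in spirit only; all parameter names and values (step size, spike strength, the fixed perturbation) are illustrative assumptions, not the paper's notation or its precise model.

```python
import numpy as np

def online_pca_gradient_ascent(points, w0, eta=0.1):
    """Non-convex online gradient ascent for online PCA (Oja-style sketch).

    At step t the learner holds a unit vector w, receives a point x_t,
    earns reward (w . x_t)^2, takes a gradient step on that reward, and
    renormalizes back onto the unit sphere. This is an illustrative
    sketch, not the paper's exact algorithm or analysis.
    """
    w = w0 / np.linalg.norm(w0)
    rewards = []
    for x in points:
        rewards.append(float(np.dot(w, x) ** 2))
        # Gradient of (w . x)^2 is 2 * (w . x) * x; the constant 2 is
        # absorbed into the step size eta.
        w = w + eta * np.dot(w, x) * x
        w = w / np.linalg.norm(w)  # project back onto the unit sphere
    return w, rewards

# Toy adversarially-perturbed spiked-covariance data (hypothetical choices):
# each point is a strong signal along a fixed spike u, plus isotropic
# stochastic noise, plus a small fixed shift standing in for the
# adversarial perturbation.
rng = np.random.default_rng(0)
d, T = 20, 2000
u = np.zeros(d)
u[0] = 1.0                                   # planted spike direction
adv = 0.05 * np.ones(d) / np.sqrt(d)         # bounded "adversarial" shift
X = [3.0 * rng.standard_normal() * u
     + 0.3 * rng.standard_normal(d)
     + adv
     for _ in range(T)]

# "Warm start": initialize near the spike, as the result requires.
w_final, _ = online_pca_gradient_ascent(X, w0=u + 0.1 * rng.standard_normal(d))
alignment = abs(np.dot(w_final, u))          # close to 1 when the spike is found
```

With a warm start close to the spike direction, the iterates stay aligned with the top principal component, which is the regime in which the paper's regret guarantee applies.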


