A General Framework for Analyzing Stochastic Dynamics in Learning Algorithms

06/11/2020
by Chi-Ning Chou, et al.

We present a general framework for proving high-probability bounds on stochastic dynamics in learning algorithms. Our framework composes standard techniques, namely stopping times, martingale concentration, and closed-form solutions, into a streamlined three-step recipe, together with a general and flexible principle for carrying it out. To demonstrate the power and flexibility of our framework, we apply it to three very different learning problems: stochastic gradient descent for strongly convex functions, streaming principal component analysis, and linear bandits with stochastic gradient descent updates. We improve the state-of-the-art bounds for all three dynamics.
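As a concrete illustration of the first dynamic the framework is applied to, here is a minimal sketch (not the paper's analysis) of SGD on a strongly convex quadratic with noisy gradients. The objective f(x) = ½‖x‖², the noise level, and the 1/(t+1) step-size schedule are all illustrative choices; the point is that the iterates form exactly the kind of stochastic dynamic the abstract describes, concentrating near the optimum with high probability.

```python
import numpy as np

# Illustrative SGD dynamic on the strongly convex quadratic
# f(x) = 0.5 * ||x||^2, whose true gradient is x. Each step sees
# the gradient corrupted by Gaussian noise, and uses the standard
# step size eta_t = 1 / (t + 1) for strongly convex objectives.
rng = np.random.default_rng(0)
x = np.ones(5)  # initial iterate, away from the optimum at 0
for t in range(1000):
    grad = x + 0.1 * rng.standard_normal(5)  # stochastic gradient
    x = x - (1.0 / (t + 1)) * grad
print(np.linalg.norm(x))  # distance to the optimum after 1000 steps
```

Running this, the final iterate lands close to the optimum at the origin; high-probability analyses of such dynamics bound how far the trajectory can stray along the way, not just where it ends up.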


Related research

- 12/13/2018 — Tight Analyses for Non-Smooth Stochastic Gradient Descent: Consider the problem of minimizing functions that are Lipschitz and stro...
- 02/09/2021 — Berry–Esseen Bounds for Multivariate Nonlinear Statistics with Applications to M-estimators and Stochastic Gradient Descent Algorithms: We establish a Berry–Esseen bound for general multivariate nonlinear sta...
- 05/15/2014 — Fast Ridge Regression with Randomized Principal Component Analysis and Gradient Descent: We propose a new two stage algorithm LING for large scale regression pro...
- 11/04/2019 — ODE-Inspired Analysis for the Biological Version of Oja's Rule in Solving Streaming PCA: Oja's rule [Oja, Journal of mathematical biology 1982] is a well-known b...
- 12/08/2017 — Scaling Limit: Exact and Tractable Analysis of Online Learning Algorithms with Applications to Regularized Regression and PCA: We present a framework for analyzing the exact dynamics of a class of on...
- 09/22/2019 — A generalization of regularized dual averaging and its dynamics: Excessive computational cost for learning large data and streaming data ...
- 05/26/2015 — Surrogate Functions for Maximizing Precision at the Top: The problem of maximizing precision at the top of a ranked list, often d...
