
- Recent Theoretical Advances in Non-Convex Optimization
  Motivated by recent increased interest in optimization algorithms for no...
- Local SGD: Unified Theory and New Efficient Methods
  We present a unified framework for analyzing local SGD methods in the co...
  (see the local SGD sketch after this list)
- Linearly Converging Error Compensated SGD
  In this paper, we propose a unified analysis of variants of distributed ...
  (see the error-feedback sketch after this list)
- Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping
  In this paper, we propose a new accelerated stochastic first-order metho...
  (see the clipped-SGD sketch after this list)
- A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent
  In this paper we introduce a unified analysis of a large family of varia...
- Distributed Learning with Compressed Gradient Differences
  Training very large machine learning models requires a distributed compu...
  (see the compressed-differences sketch after this list)
- An Accelerated Directional Derivative Method for Smooth Stochastic Convex Optimization
  We consider smooth stochastic convex optimization problems in the contex...
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
  We consider an unconstrained problem of minimization of a smooth convex ...
  (see the two-point zeroth-order sketch after this list)
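
The entries above name a few core algorithmic primitives; the sketches below illustrate them. First, for the local SGD entry: a minimal sketch of the basic local SGD template, in which each worker runs several SGD steps on its own data shard and the iterates are periodically averaged. The least-squares objective, step size, number of local steps, and plain-averaging synchronization are illustrative assumptions, not details taken from the paper.

# Minimal local SGD sketch: M workers run H local SGD steps on their own
# data shards, then synchronize by averaging their iterates.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative least-squares problem split across M workers (assumption).
M, n, d = 4, 200, 10
A = [rng.normal(size=(n, d)) for _ in range(M)]
b = [a @ rng.normal(size=d) + 0.1 * rng.normal(size=n) for a in A]

def stoch_grad(x, m):
    """Gradient of 0.5 * (a_i @ x - b_i)^2 at a random sample of worker m."""
    i = rng.integers(n)
    a = A[m][i]
    return (a @ x - b[m][i]) * a

x = np.zeros(d)                # shared model
lr, H, rounds = 0.01, 10, 200  # step size, local steps, communication rounds
for _ in range(rounds):
    locals_ = []
    for m in range(M):         # each worker starts from the shared model
        xm = x.copy()
        for _ in range(H):     # H local SGD steps without communication
            xm -= lr * stoch_grad(xm, m)
        locals_.append(xm)
    x = np.mean(locals_, axis=0)  # synchronize: average local iterates

print("final objective:",
      sum(0.5 * np.mean((A[m] @ x - b[m]) ** 2) for m in range(M)) / M)

Averaging less often (larger H) cuts communication but lets the local iterates drift apart on heterogeneous data; quantifying that trade-off is precisely what this line of work addresses.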
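
For the error-compensated SGD entry: a sketch of the standard error-feedback mechanism such methods build on. The gradient is compressed before being applied, and whatever the compressor drops is stored and added back on the next step. The top-k compressor, single-worker setting, and least-squares objective are illustrative assumptions; the paper treats a distributed setting and a broader family of methods.

# Error feedback sketch: compress the gradient before applying it, and carry
# the compression error forward so it is corrected in later steps.
import numpy as np

rng = np.random.default_rng(1)

d, n = 20, 500
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d)

def top_k(v, k):
    """Keep the k largest-magnitude entries, zero the rest (illustrative compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

x = np.zeros(d)
e = np.zeros(d)          # error memory
lr, k = 0.001, 3
for _ in range(3000):
    i = rng.integers(n)
    g = (A[i] @ x - b[i]) * A[i]   # stochastic gradient
    p = g + e                      # compensate with accumulated error
    delta = top_k(p, k)            # what would actually be transmitted
    e = p - delta                  # remember what the compressor dropped
    x -= lr * delta

print("residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))

The memory term is what makes biased compressors such as top-k usable: without it, coordinates the compressor keeps dropping would never be corrected, and the iteration can fail to converge.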
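
For the heavy-tailed-noise entry: the paper proposes an accelerated method built around gradient clipping; the sketch below shows only the plain clipped-SGD step, without acceleration. The Student-t noise (infinite variance for df < 2), clipping threshold, and step size are assumptions chosen to make the heavy-tail effect visible.

# Gradient clipping sketch: rescale the stochastic gradient whenever its norm
# exceeds a threshold, which tames heavy-tailed noise.
import numpy as np

rng = np.random.default_rng(2)

d = 10
x_star = rng.normal(size=d)

def noisy_grad(x):
    """Gradient of 0.5 * ||x - x_star||^2 plus heavy-tailed Student-t noise (assumption)."""
    return (x - x_star) + rng.standard_t(df=1.5, size=d)

def clip(g, lam):
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

x = np.zeros(d)
lr, lam = 0.05, 5.0
for _ in range(2000):
    x -= lr * clip(noisy_grad(x), lam)

print("distance to optimum:", np.linalg.norm(x - x_star))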
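
For the compressed gradient differences entry: a sketch of the idea named in the title, compressing the difference between the current gradient and a per-worker reference point, so that the transmitted quantity shrinks as training converges. The quadratic objectives, unbiased random-k compressor, and parameter choices are illustrative assumptions rather than the paper's method.

# Compressed gradient differences sketch: each worker maintains a reference
# point h_m and transmits only a compressed difference g_m - h_m.
import numpy as np

rng = np.random.default_rng(3)

M, d = 4, 15
x_star = rng.normal(size=d)
shift = [rng.normal(scale=0.5, size=d) for _ in range(M)]  # data heterogeneity

def grad(x, m):
    """Worker m's gradient of 0.5 * ||x - (x_star + shift_m)||^2 (illustrative)."""
    return x - (x_star + shift[m])

def rand_sparsify(v, k):
    """Unbiased random-k sparsification (illustrative compressor)."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

x = np.zeros(d)
h = [np.zeros(d) for _ in range(M)]   # per-worker reference points
lr, alpha, k = 0.1, 0.2, 3
for _ in range(500):
    g_hat = np.zeros(d)
    for m in range(M):
        delta = rand_sparsify(grad(x, m) - h[m], k)  # compressed difference
        g_hat += h[m] + delta                        # server's gradient estimate
        h[m] += alpha * delta                        # move the reference point
    x -= lr * g_hat / M

print("distance to average optimum:",
      np.linalg.norm(x - (x_star + np.mean(shift, axis=0))))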
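
For the last two entries, on directional-derivative and derivative-free stochastic convex optimization: a sketch of the two-point estimator both build on, approximating a directional derivative with two function evaluations along a random direction. The test objective, smoothing parameter tau, and plain (non-accelerated) update are assumptions; both papers add acceleration on top of this primitive.

# Two-point zeroth-order sketch: estimate a directional derivative from two
# function evaluations and descend along the sampled direction.
import numpy as np

rng = np.random.default_rng(4)

d = 10
x_star = rng.normal(size=d)

def f(x):
    """Smooth convex test objective (illustrative)."""
    return 0.5 * np.sum((x - x_star) ** 2)

x = np.zeros(d)
lr, tau = 0.05, 1e-6   # step size and finite-difference parameter
for _ in range(5000):
    e = rng.normal(size=d)
    e /= np.linalg.norm(e)                 # random unit direction
    dd = (f(x + tau * e) - f(x)) / tau     # approx. <grad f(x), e>
    x -= lr * dd * d * e                   # d * dd * e estimates grad f(x)

print("distance to optimum:", np.linalg.norm(x - x_star))

Since e is uniform on the unit sphere, E[e e^T] = I/d, so scaling the directional estimate by d makes d * dd * e an (approximately) unbiased gradient estimate; only function values, never gradients, are evaluated.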