Motivated by the increasing popularity and importance of large-scale tra...
We present a partially personalized formulation of Federated Learning (FL) ...
This note focuses on a simple approach to the unified analysis of SGD-ty...
Distributed optimization with open collaboration is a popular field sinc...
Single-call stochastic extragradient methods, like stochastic past extragradient...
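As a rough illustration of the single-call idea, here is a minimal NumPy sketch of the past extragradient (Popov) recursion: only one new operator evaluation per iteration, with the extrapolation step reusing the value computed at the previous iteration. The function name, step size, and bilinear toy problem are illustrative assumptions, not details taken from the paper; the stochastic variant would replace F with noisy estimates.

```python
import numpy as np

def past_extragradient(F, z0, gamma=0.2, iters=500):
    """Single-call (past) extragradient, i.e. Popov's method: a minimal sketch."""
    z = z0.copy()
    F_prev = F(z0)                    # bootstrap with F at the starting point
    for _ in range(iters):
        z_half = z - gamma * F_prev   # extrapolate using the *past* operator value
        F_prev = F(z_half)            # the single operator call this iteration
        z = z - gamma * F_prev        # update from the extrapolated point
    return z

# Toy bilinear saddle problem min_x max_y x*y, with operator F(z) = (y, -x);
# the iterates converge toward the solution (0, 0).
F = lambda z: np.array([z[1], -z[0]])
print(past_extragradient(F, np.array([1.0, 1.0])))
```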
Byzantine-robustness has been gaining a lot of attention due to the grow...
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms...
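A minimal sketch of the simultaneous SGDA update for min_x max_y f(x, y) follows; the "stochastic" gradients are modeled here as exact gradients plus Gaussian noise, and all names, constants, and the toy problem are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgda(grad_x, grad_y, x0, y0, gamma=0.05, iters=2000, sigma=0.1):
    """Simultaneous stochastic gradient descent-ascent: a minimal sketch."""
    x, y = x0, y0
    for _ in range(iters):
        gx = grad_x(x, y) + sigma * rng.standard_normal()  # descent direction in x
        gy = grad_y(x, y) + sigma * rng.standard_normal()  # ascent direction in y
        x, y = x - gamma * gx, y + gamma * gy              # simultaneous update
    return x, y

# Strongly-convex-strongly-concave toy: f(x, y) = x**2/2 + x*y - y**2/2,
# whose saddle point is (0, 0); SGDA settles in a noise ball around it.
print(sgda(lambda x, y: x + y, lambda x, y: x - y, 3.0, -2.0))
```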
We propose and study a new class of gradient communication mechanisms fo...
In this thesis, we propose new theoretical frameworks for the analysis o...
The Stochastic Extragradient (SEG) method is one of the most popular algorithms...
The extragradient method (EG) [Korpelevich, 1976] is one of the most popular ...
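For contrast with the single-call sketch above, here is a minimal deterministic EG loop, which makes two operator calls per iteration; SEG, mentioned just above, replaces the exact operator values with unbiased stochastic estimates. The step size and bilinear toy problem are illustrative assumptions.

```python
import numpy as np

def extragradient(F, z0, gamma=0.3, iters=300):
    """Extragradient (EG): a minimal sketch with two operator calls per step."""
    z = z0.copy()
    for _ in range(iters):
        z_half = z - gamma * F(z)     # extrapolation (first operator call)
        z = z - gamma * F(z_half)     # update from z_half (second operator call)
    return z

# Bilinear game min_x max_y x*y, F(z) = (y, -x): plain gradient steps spiral
# away from the solution here, while EG converges to (0, 0).
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, np.array([1.0, 1.0])))
```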
First proposed by Seide et al. (2014) as a heuristic, error feedback (EF) is a ...
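A minimal single-node sketch of the EF mechanism around a biased Top-k compressor follows; the compressor choice, step size, and quadratic toy problem are assumptions made for illustration. The key point is that the residual vector stores whatever the compressor discarded and re-injects it into the next step, so no information is permanently lost.

```python
import numpy as np

def top_k(v, k):
    """Greedy (biased) Top-k compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(grad, x0, gamma=0.2, iters=300, k=1):
    """Error feedback around a compressor: a minimal sketch."""
    x, e = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        p = gamma * grad(x) + e   # proposed step plus the remembered error
        c = top_k(p, k)           # only the compressed part is applied / sent
        e = p - c                 # remember what the compressor dropped
        x = x - c
    return x

# Quadratic toy: minimize ||x||^2 / 2, so grad(x) = x; EF-SGD drives x to 0.
print(ef_sgd(lambda x: x, np.array([2.0, -1.0, 0.5])))
```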
Some of the hardest problems in deep learning can be solved with the com...
Thanks to their practical efficiency and the random nature of the data, stochastic...
Training deep neural networks on large datasets can often be accelerated...
We develop and analyze MARINA: a new communication-efficient method for ...
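A minimal sketch of a MARINA-style loop with an unbiased Rand-k sparsifier follows; the compressor, synchronization probability p, step size, and two-worker quadratic toy are illustrative assumptions, not the paper's setup. The distinguishing design choice is that workers usually transmit compressed gradient differences, which shrink as the method converges, with rare full-gradient synchronizations.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_k(v, k):
    """Unbiased Rand-k sparsifier: keep k random entries, rescaled by d/k."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

def marina(grads, x0, gamma=0.05, p=0.2, iters=400, k=1):
    """MARINA-style loop: a minimal sketch with n workers."""
    n, x = len(grads), x0.copy()
    g = sum(gi(x) for gi in grads) / n          # initial full synchronization
    for _ in range(iters):
        x_new = x - gamma * g
        if rng.random() < p:                    # rare, expensive full sync
            g = sum(gi(x_new) for gi in grads) / n
        else:                                   # cheap compressed-difference step
            g = g + sum(rand_k(gi(x_new) - gi(x), k) for gi in grads) / n
        x = x_new
    return x

# Two workers with quadratics f_i(x) = ||x - b_i||^2 / 2; the minimizer of
# the average loss is the mean of the b_i, here (0.0, 1.0).
grads = [lambda x: x - np.array([1.0, 0.0]),
         lambda x: x - np.array([-1.0, 2.0])]
print(marina(grads, np.zeros(2)))
```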
Motivated by recent increased interest in optimization algorithms for no...
We present a unified framework for analyzing local SGD methods in the co...
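A minimal sketch of the local SGD template such frameworks analyze is shown below, with periodic averaging as the only communication; the worker losses, noise model, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_sgd(grads, x0, gamma=0.1, rounds=50, local_steps=5, sigma=0.05):
    """Local SGD: each worker takes several local stochastic gradient steps,
    then all iterates are averaged; communication happens once per round."""
    xs = [x0.copy() for _ in grads]
    for _ in range(rounds):
        for i, gi in enumerate(grads):
            for _ in range(local_steps):        # local steps, no communication
                noise = sigma * rng.standard_normal(x0.shape)
                xs[i] = xs[i] - gamma * (gi(xs[i]) + noise)
        avg = sum(xs) / len(xs)                 # the only communication step
        xs = [avg.copy() for _ in xs]
    return xs[0]

# Two heterogeneous workers with f_i(x) = ||x - b_i||^2 / 2; in this symmetric
# quadratic case the iterates hover around the mean of the b_i, i.e. (0, 2).
grads = [lambda x: x - np.array([1.0, 1.0]),
         lambda x: x - np.array([-1.0, 3.0])]
print(local_sgd(grads, np.zeros(2)))
```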
In this paper, we propose a unified analysis of variants of distributed ...
In this paper, we propose a new accelerated stochastic first-order metho...
In this paper, we introduce a unified analysis of a large family of varia...
Training very large machine learning models requires a distributed compu...
We consider smooth stochastic convex optimization problems in the contex...
We consider an unconstrained problem of minimization of a smooth convex ...