Motivated by the increasing popularity and importance of large-scale tra...
We present a partially personalized formulation of Federated Learning (F...
We propose Adaptive Compressed Gradient Descent (AdaCGD) - a novel optim...
Despite their high computation and communication costs, Newton-type meth...
Recent advances in distributed optimization have shown that Newton-type ...
Inspired by the recent work of Islamov et al. (2021), we propose a family of ...
We develop several new communication-efficient second-order methods for ...