A Distributed Cubic-Regularized Newton Method for Smooth Convex Optimization over Networks

07/07/2020
by César A. Uribe, et al.

We propose a distributed, cubic-regularized Newton method for large-scale convex optimization over networks. The proposed method requires only local computations and communications and is suitable for federated learning applications over arbitrary network topologies. We show an O(k^{-3}) convergence rate, where k is the number of iterations, when the cost function is convex with Lipschitz-continuous gradient and Hessian. We further provide network-dependent bounds on the communication required at each step of the algorithm. Numerical experiments validate our theoretical results.
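To make the core building block concrete, below is a minimal single-machine sketch in Python of one cubic-regularized Newton step, i.e., minimizing the model g^T s + (1/2) s^T H s + (M/6)||s||^3 around the current iterate. This is only an illustration of the centralized cubic step, not the authors' distributed algorithm; the handles grad and hess and the Hessian Lipschitz constant M are assumed names. The subproblem is solved by bisection on the step radius r, using the stationarity condition s = -(H + (M r / 2) I)^{-1} g with ||s|| = r.

```python
import numpy as np

def cubic_newton_step(x, grad, hess, M, tol=1e-10):
    """One cubic-regularized Newton step:
    x+ = x + argmin_s g^T s + 0.5 s^T H s + (M/6) ||s||^3.
    For convex f (H positive semidefinite), ||s(r)|| - r is
    decreasing in r, so the radius can be found by bisection.
    """
    g = grad(x)
    H = hess(x)
    I = np.eye(len(x))

    def step_for(r):
        # Shifted Newton direction for a trial radius r > 0.
        return np.linalg.solve(H + 0.5 * M * r * I, -g)

    # Grow the bracket until ||s(hi)|| <= hi.
    lo, hi = 0.0, 1.0
    while np.linalg.norm(step_for(hi)) > hi:
        hi *= 2.0
    # Bisect to the radius satisfying ||s(r)|| = r.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step_for(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return x + step_for(hi)

# Hypothetical usage on f(x) = ||x||^4 / 4, a convex function whose
# Hessian is Lipschitz on bounded sets (M = 12 is a rough local choice).
f_grad = lambda x: np.dot(x, x) * x
f_hess = lambda x: np.dot(x, x) * np.eye(len(x)) + 2.0 * np.outer(x, x)
x = np.array([2.0, -1.0])
for _ in range(20):
    x = cubic_newton_step(x, f_grad, f_hess, M=12.0)
```

In the distributed setting described in the abstract, each node would interleave such local second-order updates with communication rounds over the network to agree on the global step; the sketch above covers only the local cubic subproblem.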
