Automated Worst-Case Performance Analysis of Decentralized Gradient Descent

March 26, 2021 · Sebastien Colla, et al.

We develop a methodology to automatically compute worst-case performance bounds for a class of decentralized optimization algorithms that minimize the average of local functions distributed across a network. We extend to decentralized optimization the recently proposed performance estimation problem (PEP) approach, which computes the exact worst-case performance and a worst-case instance of centralized algorithms by solving a semidefinite program (SDP). We obtain an exact formulation when the network matrix is given, and a relaxation for classes of network matrices characterized by their spectral range. We apply our methodology to the distributed (sub)gradient method, obtain a nearly tight worst-case performance bound that significantly improves over the literature, and gain insights into the worst communication networks for a given spectral range.
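To make the algorithm under study concrete, the sketch below shows the distributed (sub)gradient method iteration that the paper analyzes, written in Python. The mixing matrix W, step size, and quadratic local objectives are illustrative assumptions chosen for this example, not values taken from the paper.

    # Minimal sketch of the distributed (sub)gradient method (DGD).
    # W, alpha, and the local objectives below are assumed for illustration.
    import numpy as np

    n_agents, n_iters, alpha = 3, 50, 0.1

    # Doubly stochastic mixing matrix for a 3-agent network (assumed example).
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])

    # Local objectives f_i(x) = 0.5 * (x - b_i)^2; the global goal is to
    # minimize their average, whose minimizer is the mean of the b_i.
    b = np.array([1.0, 2.0, 6.0])
    grad = lambda x: x - b            # stacked local (sub)gradients

    x = np.zeros(n_agents)            # one scalar iterate per agent
    for _ in range(n_iters):
        # Each agent averages its neighbors' iterates, then takes a local
        # (sub)gradient step: x_i^{k+1} = sum_j W_ij * x_j^k - alpha * g_i(x_i^k)
        x = W @ x - alpha * grad(x)

    print("agent iterates:", x, " average of b_i:", b.mean())

The sketch only illustrates the iteration being analyzed; the PEP methodology described in the abstract instead computes, by solving an SDP, the worst-case value of a performance measure over all admissible local functions and over network matrices with a given spectral range.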
