Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

07/22/2021
by Yuyang Deng, et al.

In this paper we prove that Local (S)GD (also known as FedAvg) can optimize two-layer neural networks with the Rectified Linear Unit (ReLU) activation function in polynomial time. Despite the established convergence theory of Local SGD for optimizing general smooth functions in communication-efficient distributed optimization, its convergence on non-smooth ReLU networks still eludes a full theoretical understanding. The key property used in many analyses of Local SGD on smooth functions is gradient Lipschitzness, which ensures that the gradients at the local models do not drift far from the gradient at the averaged model. However, this desirable property does not hold for networks with the non-smooth ReLU activation function. We show that, even though ReLU networks do not admit gradient Lipschitzness, the difference between the gradients at the local models and at the averaged model does not grow too much under the dynamics of Local SGD. We validate our theoretical results via extensive experiments. This work is the first to show the convergence of Local SGD on non-smooth functions, and sheds light on the optimization theory of federated training of deep neural networks.
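
To make the setting concrete, below is a minimal sketch (not the paper's implementation) of Local SGD / FedAvg on a two-layer ReLU network: each of K clients runs a few local (S)GD steps on its own data shard starting from the shared model, and the server then averages the local models. The squared-loss objective, synthetic data, hyperparameters, and the choice to train only the first-layer weights are illustrative assumptions, in the spirit of common overparameterization analyses.

```python
# A minimal sketch of Local SGD (FedAvg) on a two-layer ReLU network, assuming
# squared loss on synthetic data and training only the first-layer weights W
# (the outer weights a are fixed at initialization). Hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data split across K clients.
n, d, m, K = 64, 10, 256, 4                  # samples, input dim, hidden width, clients
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
shards = np.array_split(np.arange(n), K)     # one index shard per client

# Two-layer ReLU network: f(x) = a^T relu(W x) / sqrt(m), with a fixed at init.
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

def predict(W, Xb):
    return np.maximum(Xb @ W.T, 0.0) @ a / np.sqrt(m)

def grad_W(W, Xb, yb):
    """(Sub)gradient of 0.5 * mean squared error w.r.t. W."""
    pre = Xb @ W.T                                        # (batch, m) pre-activations
    err = np.maximum(pre, 0.0) @ a / np.sqrt(m) - yb      # (batch,) residuals
    act = (pre > 0.0).astype(float)                       # ReLU derivative (0 at the kink)
    return (act * (err[:, None] * a)).T @ Xb / (np.sqrt(m) * len(yb))

eta, local_steps, rounds = 0.1, 5, 50
for r in range(rounds):
    local_Ws = []
    for idx in shards:                       # each client starts from the shared model
        Wk = W.copy()
        for _ in range(local_steps):         # local (S)GD steps on the client's own data
            Wk -= eta * grad_W(Wk, X[idx], y[idx])
        local_Ws.append(Wk)
    W = np.mean(local_Ws, axis=0)            # communication round: average local models
    if r % 10 == 0:
        loss = 0.5 * np.mean((predict(W, X) - y) ** 2)
        print(f"round {r:3d}  train loss {loss:.4f}")
```

The averaging step is exactly where the drift between local gradients and the gradient at the averaged model matters; bounding this drift without gradient Lipschitzness is the technical point of the paper.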


