Each round in Differentially Private Stochastic Gradient Descent (DPSGD) t...
In federated learning, collaborative learning takes place among a set of cli...
Classical differentially private DP-SGD implements individual clipping wit...
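The individual (per-example) clipping mentioned here can be illustrated with a minimal sketch of a single noisy SGD step; the parameter names (clip_norm, noise_multiplier) and the plain NumPy form are illustrative assumptions, not the implementation used in the listed work.

import numpy as np

def dpsgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    # Clip each individual gradient to L2 norm at most clip_norm (illustrative sketch).
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    # Sum the clipped gradients and add Gaussian noise calibrated to the clipping norm.
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    # Average over the batch and take a gradient step.
    return w - lr * noisy_sum / len(per_example_grads)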
Recent years have witnessed a trend of secure processor design in both a...
We consider the Hogwild! setting where clients use local SGD iterations ...
Hogwild! implements asynchronous Stochastic Gradient Descent (SGD) where...
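For readers unfamiliar with Hogwild!, the sketch below shows its core idea of lock-free asynchronous updates to a shared parameter vector; the Python threads and helper names (hogwild_sgd, grad_fn) are illustrative assumptions, and because of Python's GIL this is a schematic illustration rather than a faithful parallel implementation.

import threading
import numpy as np

def hogwild_sgd(w, data, grad_fn, lr=0.01, n_threads=4, steps_per_thread=1000):
    # Every worker reads and writes the shared parameter vector w without locks.
    def worker():
        rng = np.random.default_rng()
        for _ in range(steps_per_thread):
            x = data[rng.integers(len(data))]
            g = grad_fn(w, x)   # gradient computed against a possibly stale view of w
            w[:] -= lr * g      # lock-free in-place update of the shared vector
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w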
The feasibility of federated learning is highly constrained by the serve...
Recent defenses published at venues like NIPS, ICML, ICLR and CVPR are m...
We propose a novel hybrid stochastic policy gradient estimator by combin...
In this paper, we provide a unified convergence analysis for a class of ...
We propose a novel defense against all existing gradient-based adversari...
The total complexity (measured as the total number of gradient computati...
We propose a novel diminishing learning rate scheme, coined Decreasing-T...
We study convergence of Stochastic Gradient Descent (SGD) for strongly c...
We study Stochastic Gradient Descent (SGD) with diminishing step sizes f...
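As generic background for the two SGD entries above (not a restatement of their exact theorems), the classical diminishing step size result for a \mu-strongly convex, smooth objective with unbiased stochastic gradients of bounded variance takes the form

\[
  \eta_t = \frac{\alpha}{\mu\,(t + \beta)}, \qquad
  \mathbb{E}\big[\|w_t - w_*\|^2\big] = O\!\left(\tfrac{1}{t}\right),
\]

where w_* is the minimizer and \alpha, \beta are constants determined by the smoothness and noise assumptions; the symbols here are placeholders, not the notation of the listed papers.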
Stochastic gradient descent (SGD) is the optimization algorithm of choic...