Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents
This study considers a federated learning setup in which cost-sensitive and strategic agents train a learning model with a server. During each round, each agent samples a minibatch of training data and sends his gradient update. The agent incurs a cost, increasing in his minibatch size, associated with data collection, gradient computation, and communication. The agents have the freedom to choose their minibatch sizes and may even opt out of training. To reduce his cost, an agent may shrink his minibatch, which in turn increases the noise level of his gradient update. The server can offer rewards that compensate the agents for their costs and incentivize their participation, but she lacks the capability to validate the agents' true minibatch sizes. To tackle this challenge, the proposed reward mechanism evaluates the quality of each agent's gradient according to its distance to a reference constructed from the gradients provided by the other agents. It is shown that the proposed reward mechanism admits a cooperative Nash equilibrium in which the agents choose their minibatch sizes according to the requests of the server.
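The core idea of the mechanism, scoring each agent's gradient by its distance to a leave-one-out reference built from the other agents' gradients, can be sketched as follows. This is a minimal illustration, not the paper's actual reward function: the names `base_reward` and `penalty` are hypothetical parameters, and the specific (linear) dependence of the reward on the distance is an assumption for illustration.

```python
import numpy as np

def peer_reference_rewards(gradients, base_reward=1.0, penalty=0.5):
    """Illustrative sketch: reward each agent based on how close his
    reported gradient is to the average of the OTHER agents' gradients.
    base_reward and penalty are hypothetical tuning parameters."""
    g = np.asarray(gradients, dtype=float)
    n = len(g)
    total = g.sum(axis=0)
    rewards = np.empty(n)
    for i in range(n):
        # Leave-one-out reference: mean of all gradients except agent i's.
        reference = (total - g[i]) / (n - 1)
        distance = np.linalg.norm(g[i] - reference)
        # Reward shrinks as the agent's gradient drifts from the reference,
        # so a noisier (smaller-minibatch) gradient tends to earn less.
        rewards[i] = base_reward - penalty * distance
    return rewards

# Agents reporting similar gradients earn the full base reward;
# an outlier (e.g. from a tiny, noisy minibatch) earns less.
aligned = peer_reference_rewards([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
noisy = peer_reference_rewards([[1.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
```

Because an agent cannot lower the reference himself, submitting a low-effort gradient only moves him away from it, which is the intuition behind the cooperative equilibrium claimed in the abstract.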