Distributed Stochastic Gradient Descent with Cost-Sensitive and Strategic Agents

12/05/2022
by   Abdullah Basar Akbay, et al.

This study considers a federated learning setup in which cost-sensitive and strategic agents train a learning model with a server. In each round, every agent samples a minibatch of training data and sends his gradient update to the server. The agent incurs a cost, increasing in his minibatch size, associated with data collection, gradient computation, and communication. Agents are free to choose their minibatch sizes and may even opt out of training. To reduce his cost, an agent may shrink his minibatch, which in turn raises the noise level of his gradient update. The server can offer rewards to compensate the agents for their costs and to incentivize participation, but she cannot verify the agents' true minibatch sizes. To address this challenge, the proposed reward mechanism scores the quality of each agent's gradient by its distance to a reference constructed from the gradients provided by the other agents. It is shown that the proposed mechanism admits a cooperative Nash equilibrium in which the agents choose their minibatch sizes according to the requests of the server.
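The abstract describes the mechanism only at a high level. A minimal sketch of the core idea, under assumptions not stated in the abstract (Gaussian minibatch noise with variance shrinking in the batch size, a leave-one-out average as the reference, and a reward that is linear in the distance to that reference; the function names and the `base_reward`/`penalty` parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def agent_gradient(true_grad, batch_size, noise_scale=1.0):
    # Minibatch gradient estimate: noise shrinks as the batch size grows
    # (standard error of a sample mean scales as 1/sqrt(batch_size)).
    noise = rng.normal(0.0, noise_scale / np.sqrt(batch_size),
                       size=true_grad.shape)
    return true_grad + noise

def rewards(gradients, base_reward=1.0, penalty=0.5):
    # Score each agent's gradient by its distance to a reference built
    # from the OTHER agents' gradients (a leave-one-out average), since
    # the server cannot observe the true minibatch sizes directly.
    g = np.stack(gradients)
    n = len(gradients)
    out = []
    for i in range(n):
        reference = (g.sum(axis=0) - g[i]) / (n - 1)
        dist = np.linalg.norm(g[i] - reference)
        out.append(base_reward - penalty * dist)
    return out

true_grad = np.array([1.0, -2.0, 0.5])
# Two agents comply with a large requested batch; one shirks with a tiny one.
grads = [agent_gradient(true_grad, b) for b in (64, 64, 4)]
r = rewards(grads)
```

An agent who cuts his minibatch produces a noisier gradient that tends to sit farther from the reference, so in expectation his reward drops; this is the deterrent the mechanism relies on.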


Related research

10/25/2022: Federated Learning Using Variance Reduced Stochastic Gradient for Probabilistically Activated Agents
This paper proposes an algorithm for Federated Learning (FL) with a two-...

10/21/2021: Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning
This paper considers the problem of resilient distributed optimization a...

12/09/2017: Cost-Sensitive Approach to Batch Size Adaptation for Gradient Descent
In this paper, we propose a novel approach to automatically determine th...

06/07/2021: Asynchronous Distributed Optimization with Redundancy in Cost Functions
This paper considers the problem of asynchronous distributed multi-agent...

03/18/2019: Surrogate Optimal Control for Strategic Multi-Agent Systems
This paper studies how to design a platform to optimally control constra...

07/13/2023: Online Distributed Learning with Quantized Finite-Time Coordination
In this paper we consider online distributed learning problems. Online d...

06/07/2018: Re-evaluating evaluation
Progress in machine learning is measured by careful evaluation on proble...
