
Network-Aware Optimization of Distributed Learning for Fog Computing

04/17/2020
by Yuwei Tu et al.

Fog computing promises to enable machine learning tasks to scale to large amounts of data by distributing processing across connected devices. Two key challenges to achieving this goal are heterogeneity in devices' compute resources and topology constraints on which devices can communicate with each other. We address these challenges by developing the first network-aware distributed learning optimization methodology in which devices optimally share local data processing and send their learned parameters to a server for aggregation at certain time intervals. Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to each other, with these decisions determined through a convex data transfer optimization problem that trades off the costs associated with devices processing, offloading, and discarding data points. We analytically characterize the optimal data transfer solution for different fog network topologies, showing, for example, that the value of offloading is approximately linear in the range of computing costs in the network. Our subsequent experiments on testbed datasets we collect confirm that our algorithms substantially improve network resource utilization without sacrificing the accuracy of the learned model. In these experiments, we also study the effect of network dynamics, quantifying the impact of nodes entering or exiting the network on model learning and resource costs.
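To make the processing/offloading/discarding trade-off concrete, here is a minimal sketch of the per-unit cost comparison a device faces. All cost values and the `best_disposition` helper are hypothetical illustrations; the paper's actual method solves a constrained convex program over the whole network topology, not this simplified greedy, linear-cost version.

```python
# Toy sketch: per-unit data disposition in a fog network with linear costs.
# Hypothetical costs; this is NOT the paper's formulation, only an
# illustration of the three options each data point has.

def best_disposition(device, proc_cost, transfer_cost, neighbors, discard_cost):
    """Return (action, cost) minimizing the per-unit cost for one device.

    Options: process locally, offload to a neighbor (transfer cost plus the
    neighbor's processing cost), or discard the data point.
    """
    options = {"process": proc_cost[device]}
    for nbr in neighbors.get(device, []):
        options["offload->" + nbr] = transfer_cost[(device, nbr)] + proc_cost[nbr]
    options["discard"] = discard_cost
    action = min(options, key=options.get)
    return action, options[action]

# Example: device "a" has expensive compute, its neighbor "b" is cheap,
# so offloading (0.5 transfer + 1.0 remote processing) wins.
proc_cost = {"a": 5.0, "b": 1.0}
transfer_cost = {("a", "b"): 0.5}
neighbors = {"a": ["b"]}
print(best_disposition("a", proc_cost, transfer_cost, neighbors, discard_cost=3.0))
# -> ('offload->b', 1.5)
```

With capacity constraints on devices and links, the per-unit greedy choice is no longer valid and the problem becomes the kind of coupled convex program the paper analyzes.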


Related Research

03/26/2021
Infinity: A Scalable Infrastructure for In-Network Applications
Network programmability is an area of research both defined by its poten...

07/04/2021
FedFog: Network-Aware Optimization of Federated Learning over Wireless Fog-Cloud Systems
Federated learning (FL) is capable of performing large distributed machi...

01/04/2021
Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation
The conventional federated learning (FedL) architecture distributes mach...

07/18/2020
Multi-Stage Hybrid Federated Learning over Large-Scale Wireless Fog Networks
One of the popular methods for distributed machine learning (ML) is fede...

10/28/2022
Aggregation in the Mirror Space (AIMS): Fast, Accurate Distributed Machine Learning in Military Settings
Distributed machine learning (DML) can be an important capability for mo...

12/09/2020
Optimising cost vs accuracy of decentralised analytics in fog computing environments
The exponential growth of devices and data at the edges of the Internet ...