ScaDLES: Scalable Deep Learning over Streaming data at the Edge

01/21/2023
by Sahil Tyagi, et al.

Distributed deep learning (DDL) training systems are designed for cloud and data-center environments that assume homogeneous compute resources, high network bandwidth, sufficient memory and storage, and independent and identically distributed (IID) data across all nodes. However, these assumptions do not necessarily hold at the edge, especially when training neural networks on streaming data in an online manner. Computing at the edge suffers from both systems and statistical heterogeneity. Systems heterogeneity arises from differences in the compute resources and bandwidth specific to each device, while statistical heterogeneity comes from unbalanced and skewed data on the edge. Different streaming rates among devices can be another source of heterogeneity when dealing with streaming data. If the streaming rate is lower than the training batch size, a device needs to wait until enough samples have streamed in before performing a single iteration of stochastic gradient descent (SGD). Thus, low-volume streams act like stragglers, slowing down devices with high-volume streams in synchronous training. On the other hand, if the streaming rate is too high and a device cannot train at line rate, data accumulates quickly in its buffer. In this paper, we introduce ScaDLES to efficiently train on streaming data at the edge in an online fashion, while also addressing the challenges of limited bandwidth and training with non-IID data. We empirically show that ScaDLES converges up to 3.29 times faster than conventional distributed SGD.
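The straggler effect described above can be illustrated with a small back-of-the-envelope model (a hypothetical sketch, not the ScaDLES implementation): each device can start an SGD step only once its buffer holds a full batch, so its iteration rate is bounded by both its compute speed and its streaming rate, and synchronous training advances at the pace of the slowest device. The batch size, streaming rates, and compute time below are illustrative assumptions.

```python
# Hypothetical model of synchronous SGD over per-device streams.
# A device's time per iteration = max(time to buffer one batch, compute time).

def iterations_per_second(stream_rate, batch_size, compute_time):
    """Iteration rate for one device given its stream rate (samples/sec)."""
    fill_time = batch_size / stream_rate  # seconds to accumulate one batch
    return 1.0 / max(fill_time, compute_time)

batch_size = 32
compute_time = 0.1  # seconds per SGD step, assumed identical across devices
stream_rates = [400.0, 100.0, 40.0]  # samples/sec on three edge devices

per_device = [iterations_per_second(r, batch_size, compute_time)
              for r in stream_rates]
# Synchronous training is gated by the slowest (low-volume) stream.
sync_rate = min(per_device)

print(per_device)  # [10.0, 3.125, 1.25]
print(sync_rate)   # 1.25
```

Here the fastest device is compute-bound (10 steps/sec), but the device streaming at 40 samples/sec spends 0.8 s just filling its buffer, dragging the whole synchronous group down to 1.25 steps/sec.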

