Distributed Training for Deep Learning Models On An Edge Computing Network Using Shielded Reinforcement Learning

06/01/2022
by   Tanmoy Sen, et al.

Edge devices with local computation capability have made distributed deep learning (DL) training on edges possible. In such a method, the cluster head of a cluster of edges schedules DL training jobs from the edges. With this centralized scheduling method, the cluster head knows the loads of all edges, which avoids overloading the cluster edges, but the head itself may become overloaded. To handle this problem, we propose a multi-agent reinforcement learning (MARL) system that enables each edge to schedule its jobs using RL. However, without coordination among edges, action collisions may occur, in which multiple edges schedule tasks to the same edge and overload it. For this reason, we propose a system called Shielded ReinfOrcement Learning based DL training on Edges (SROLE). In SROLE, a shield deployed on an edge checks for action collisions and provides alternative actions to avoid them. As a central shield for the entire cluster may become a bottleneck, we further propose a decentralized shielding method, in which different shields are responsible for different regions of the cluster and coordinate to avoid action collisions on the region boundaries. Our emulation and real-device experiments show that SROLE reduces training time by 59%.
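To make the shielding idea concrete, the following is a minimal sketch, not the paper's actual algorithm: all function names, the unit-job load model, and the least-loaded fallback rule are assumptions for illustration. A shield inspects the actions proposed by the RL agents, accepts those that fit the target edge's capacity, and redirects colliding actions to an edge with spare headroom.

```python
# Hypothetical sketch of shielding for action collisions (names and the
# least-loaded reassignment rule are assumptions, not from the paper).

def shield(proposed, capacity, load):
    """Resolve action collisions: if accepting an agent's proposed edge
    would exceed that edge's capacity, reassign the job to the
    least-loaded edge that still has headroom."""
    resolved = {}
    pending = dict(load)  # projected load per edge as jobs are placed
    for agent, edge in proposed.items():
        if pending[edge] + 1 <= capacity[edge]:
            resolved[agent] = edge  # safe: keep the RL agent's action
        else:
            # collision/overload: pick an alternative edge with headroom
            alt = min((e for e in capacity if pending[e] < capacity[e]),
                      key=lambda e: pending[e])
            resolved[agent] = alt
        pending[resolved[agent]] += 1
    return resolved

# Example: three agents all pick edge "e0", which has room for one job.
actions = {"a0": "e0", "a1": "e0", "a2": "e0"}
cap = {"e0": 1, "e1": 2, "e2": 2}
cur = {"e0": 0, "e1": 0, "e2": 0}
print(shield(actions, cap, cur))  # → {'a0': 'e0', 'a1': 'e1', 'a2': 'e2'}
```

A decentralized variant would run one such shield per region and exchange only the projected loads of boundary edges between neighboring shields.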
