Boosting Distributed Machine Learning Training Through Loss-tolerant Transmission Protocol

05/07/2023
by   Zixuan Chen, et al.

Distributed Machine Learning (DML) systems are used to accelerate model training in data centers (DCs) and at edge nodes. The Parameter Server (PS) communication architecture is commonly employed, but it suffers from severe long-tail latency caused by many-to-one "incast" traffic patterns, which degrades training throughput. To address this challenge, we design the Loss-tolerant Transmission Protocol (LTP), which permits partial loss of gradients during synchronization, avoiding unneeded retransmissions and enabling faster synchronization per iteration. LTP realizes loss-tolerant transmission through out-of-order transmission and out-of-order Acknowledgments (ACKs). LTP employs Early Close to adjust the loss-tolerant threshold based on network conditions and Bubble Filling to correct for lost data, maintaining training accuracy. LTP is implemented in C++ and integrated into PyTorch. Evaluations on a testbed of 8 worker nodes and one PS node demonstrate that LTP can improve DML training throughput by up to 30x compared to traditional TCP congestion controls, without sacrificing final accuracy.
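The sketch below (not the authors' implementation) illustrates the loss-tolerant idea described in the abstract: the parameter server aggregates whatever gradient chunks arrive and "bubble-fills" the missing ones so an iteration can close early instead of waiting for retransmissions. The chunk size, the zero-filling strategy, and all function names here are assumptions made for illustration only.

    # Minimal, hypothetical sketch of loss-tolerant gradient reassembly.
    import torch

    CHUNK_SIZE = 1024  # assumed fixed chunk size, in elements

    def chunkify(grad: torch.Tensor):
        """Split a flattened gradient into fixed-size chunks for transmission."""
        return list(torch.split(grad.flatten(), CHUNK_SIZE))

    def bubble_fill(received: dict, num_chunks: int, last_chunk_len: int):
        """Reassemble a gradient from the chunks that arrived; fill lost chunks
        with zeros (one plausible correction strategy) so training can proceed."""
        chunks = []
        for i in range(num_chunks):
            if i in received:
                chunks.append(received[i])
            else:
                size = last_chunk_len if i == num_chunks - 1 else CHUNK_SIZE
                chunks.append(torch.zeros(size))  # the "bubble"
        return torch.cat(chunks)

    # Usage: simulate one worker's gradient with chunk 1 lost in transit.
    grad = torch.randn(3000)
    chunks = chunkify(grad)
    received = {i: c for i, c in enumerate(chunks) if i != 1}  # chunk 1 dropped
    recovered = bubble_fill(received, len(chunks), chunks[-1].numel())
    assert recovered.shape == grad.shape

In an actual protocol, the loss-tolerant threshold (how many chunks may be missing before retransmission is forced) would be tuned dynamically, which is the role the abstract attributes to Early Close.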
