GPU-initiated Fine-grained Overlap of Collective Communication with Computation

05/11/2023
by Kishore Punniyamurthy, et al.

In order to satisfy their ever-increasing capacity and compute requirements, many machine learning models are distributed across multiple nodes using space-efficient parallelism strategies. As a result, collective communications are often on the critical path, and hiding their latency by overlapping kernel-granular communication and computation is difficult due to the absence of independent computation. In this work, we propose fusing computation with communication using GPU-initiated networking, and we leverage GPUs' massive parallelism to enable fine-grained overlap of the fused operations. We have developed a single, self-contained GPU kernel in which workgroups (WGs) immediately communicate their results to remote GPUs as soon as they complete their computation. Meanwhile, other WGs within the same kernel perform overlapping computation, maintaining high ALU utilization. Furthermore, we propose zero-copy optimizations for peer-to-peer GPU communication in which the data computed by one GPU is written directly into the destination buffers of the peer GPUs, eliminating intermediate stores and extra buffering. Our approach leverages the emerging multi-node GPU system trend in which GPUs sit physically close to the network with direct GPU-NIC interconnects. We demonstrate our approach by creating an embedding + All-to-All fused kernel that overlaps embedding operations and the dependent all-to-all collective in DLRM models. We evaluate our approach both in simulation and on real hardware. Our evaluations show that our approach can effectively overlap All-to-All communication with embedding computations, reducing their combined execution time by 31%-58%. Scale-out simulations indicate that our approach reduces DLRM execution time by 10%.
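The sketch below illustrates the core idea: a single fused kernel in which each workgroup (thread block) finishes its slice of the embedding pooling and immediately pushes the result toward its destination, while the remaining blocks keep computing. This is a minimal single-node illustration under assumed names and sizes (fused_embedding_alltoall, peer_recv, peer_flags, and the constants are hypothetical), not the authors' kernel; CUDA peer access stands in for the paper's GPU-initiated networking, and on a multi-node system the direct store would instead be a device-initiated put issued through a GPU networking library.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

constexpr int kEmbedDim   = 128;     // embedding vector width (= block size)
constexpr int kRowsPerBag = 32;      // rows pooled into each output vector
constexpr int kBags       = 4096;    // output vectors produced by this GPU
constexpr int kTableRows  = 100000;  // rows in the local embedding table

// Fused embedding pooling + "send": one thread block per bag. As soon as a
// block finishes pooling its bag, it writes the result directly into a buffer
// that resides on the peer GPU (zero-copy) and raises a per-bag ready flag.
// Other blocks are still computing their own bags, so ALUs stay busy while
// earlier results are already in flight.
__global__ void fused_embedding_alltoall(
    const float* __restrict__ table,      // local embedding table
    const int*   __restrict__ indices,    // kBags * kRowsPerBag lookup indices
    float*       __restrict__ peer_recv,  // destination buffer on the peer GPU
    int*                      peer_flags) // per-bag "data ready" flags on the peer
{
  int bag = blockIdx.x;
  if (bag >= kBags) return;

  // Sum-pool kRowsPerBag embedding rows; each thread owns one feature lane.
  float acc = 0.0f;
  for (int r = 0; r < kRowsPerBag; ++r) {
    int row = indices[bag * kRowsPerBag + r];
    acc += table[row * kEmbedDim + threadIdx.x];
  }

  // Zero-copy communication: store straight into the peer-resident buffer.
  peer_recv[bag * kEmbedDim + threadIdx.x] = acc;

  // Wait for every thread's store, order the stores before the flag write,
  // then signal this bag as ready for the consumer on the peer GPU.
  __syncthreads();
  __threadfence_system();
  if (threadIdx.x == 0) peer_flags[bag] = 1;
}

int main() {
  // Host-side sketch for two GPUs in one node; error checks omitted.
  cudaSetDevice(0);
  cudaDeviceEnablePeerAccess(1, 0);  // let GPU 0 dereference GPU 1's memory

  float* table;   cudaMalloc(&table,   sizeof(float) * kTableRows * kEmbedDim);
  int*   indices; cudaMalloc(&indices, sizeof(int) * kBags * kRowsPerBag);
  cudaMemset(table,   0, sizeof(float) * kTableRows * kEmbedDim);
  cudaMemset(indices, 0, sizeof(int) * kBags * kRowsPerBag);

  cudaSetDevice(1);  // destination buffers are allocated on the peer GPU
  float* peer_recv;  cudaMalloc(&peer_recv,  sizeof(float) * kBags * kEmbedDim);
  int*   peer_flags; cudaMalloc(&peer_flags, sizeof(int) * kBags);
  cudaMemset(peer_flags, 0, sizeof(int) * kBags);

  cudaSetDevice(0);
  fused_embedding_alltoall<<<kBags, kEmbedDim>>>(table, indices, peer_recv, peer_flags);
  cudaDeviceSynchronize();
  std::printf("fused embedding + communication kernel finished\n");
  return 0;
}
```

The zero-copy aspect in this sketch is that each pooled vector is stored straight into a buffer living on the consuming GPU, so no intermediate local output buffer or separate communication kernel is needed; the per-bag flags let the consumer start as soon as individual bags arrive rather than after the whole kernel completes.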
