HammingMesh: A Network Topology for Large-Scale Deep Learning

09/03/2022
by Torsten Hoefler, et al.

Numerous microarchitectural optimizations unlocked tremendous processing power for deep neural networks that in turn fueled the AI revolution. With the exhaustion of such optimizations, the growth of modern AI is now gated by the performance of training systems, especially their data movement. Instead of focusing on single accelerators, we investigate the data-movement characteristics of large-scale training at full system scale. Based on our workload analysis, we design HammingMesh, a novel network topology that provides high bandwidth at low cost with high job-scheduling flexibility. Specifically, HammingMesh can provide full bandwidth and isolation for deep learning training jobs with two dimensions of parallelism. Furthermore, it also supports high global bandwidth for generic traffic. Thus, HammingMesh will power future large-scale deep learning systems with extreme bandwidth requirements.
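To make the two-level structure concrete, below is a minimal Python sketch of a simplified HammingMesh-style graph: each board is a small 2D mesh of accelerators with cheap local links, and boards in the same row or column of the board grid communicate through a shared switched network. All names here (`hammingmesh`, `connect`, the `"rowsw"`/`"colsw"` switch nodes) and the parameters `a`, `bx`, `by` are illustrative assumptions rather than the paper's notation, and the per-row/per-column networks, which the actual design builds from fat trees, are abstracted as single switch nodes.

```python
from itertools import product
from collections import defaultdict

def hammingmesh(a, bx, by):
    """Adjacency list for a simplified HammingMesh-style topology.

    Accelerators are nodes (bX, bY, x, y): board (bX, bY) in a bx-by-by
    board grid, position (x, y) on an a-by-a on-board mesh. Each board
    row / board column is served by one abstract switch node; the real
    design uses fat trees there (this is a simplification).
    """
    adj = defaultdict(set)

    def connect(u, v):
        adj[u].add(v)
        adj[v].add(u)

    for bX, bY in product(range(bx), range(by)):
        # Cheap on-board links: an a-by-a 2D mesh per board.
        for x, y in product(range(a), repeat=2):
            if x + 1 < a:
                connect((bX, bY, x, y), (bX, bY, x + 1, y))
            if y + 1 < a:
                connect((bX, bY, x, y), (bX, bY, x, y + 1))
        # West/east edge accelerators attach to the board-row network.
        for y in range(a):
            connect((bX, bY, 0, y), ("rowsw", bY))
            connect((bX, bY, a - 1, y), ("rowsw", bY))
        # North/south edge accelerators attach to the board-column network.
        for x in range(a):
            connect((bX, bY, x, 0), ("colsw", bX))
            connect((bX, bY, x, a - 1), ("colsw", bX))
    return adj

if __name__ == "__main__":
    # 2x2 boards in a 4x4 board grid -> 64 accelerators total.
    adj = hammingmesh(a=2, bx=4, by=4)
    accels = [n for n in adj if isinstance(n[0], int)]
    links = sum(len(v) for v in adj.values()) // 2
    print(f"{len(accels)} accelerators, {links} undirected links")
```

The sketch only illustrates the cost structure the abstract alludes to: most links are short on-board traces, while the switched row/column networks supply inter-board bandwidth and let a scheduler place jobs on any rectangle of boards.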

Related research

- Node-Based Job Scheduling for Large Scale Simulations of Short Running Jobs (08/25/2021)
  Diverse workloads such as interactive supercomputing, big data analysis,...

- TA-MoE: Topology-Aware Large Scale Mixture-of-Expert Training (02/20/2023)
  Sparsely gated Mixture-of-Expert (MoE) has demonstrated its effectivenes...

- Parameter Hub: High Performance Parameter Servers for Efficient Distributed Deep Neural Network Training (01/30/2018)
  Most work in the deep learning systems community has focused on faster i...

- Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models (04/12/2021)
  Deep learning recommendation models (DLRMs) are used across many busines...

- NeuroFabric: Identifying Ideal Topologies for Training A Priori Sparse Networks (02/19/2020)
  Long training times of deep neural networks are a bottleneck in machine ...

- Terra: Scalable Cross-Layer GDA Optimizations (04/17/2019)
  Geo-distributed analytics (GDA) frameworks transfer large datasets over ...

- Accelerating Recommender Systems via Hardware "scale-in" (09/11/2020)
  In today's era of "scale-out", this paper makes the case that a speciali...
