A Low-latency Communication Design for Brain Simulations

05/14/2022
by Xin Du, et al.

Brain simulation, as one of the latest advances in artificial intelligence, facilitates a better understanding of how information is represented and processed in the brain. The extreme complexity of the human brain makes brain simulations feasible only on high-performance computing platforms. Supercomputers with large numbers of interconnected graphics processing units (GPUs) are currently employed to support brain simulations. High-throughput, low-latency inter-GPU communication in such supercomputers therefore plays a crucial role in meeting the performance requirements of brain simulation as a highly time-sensitive application. In this paper, we first provide an overview of current parallelization technologies for brain simulations on multi-GPU architectures. We then analyze the communication challenges of brain simulation and summarize design guidelines for addressing them. Furthermore, we propose a partitioning algorithm and a two-level routing method to achieve efficient low-latency communication in multi-GPU architectures for brain simulation. We report experimental results obtained on a supercomputer with 2,000 GPUs simulating a brain model with 10 billion neurons, showing that our approach can significantly improve communication performance. We also discuss open issues and identify research directions for low-latency communication design for brain simulations.
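The abstract only names the two techniques; as a rough illustration of the general idea (not the paper's actual algorithm), the Python sketch below greedily assigns neuron populations to GPUs so that strongly connected populations stay together while compute load stays balanced, and builds a two-level routing table in which GPUs on the same node communicate directly and inter-node traffic is funneled through one gateway GPU per node. All function names, the scoring heuristic, and the gateway convention are assumptions made for this sketch.

```python
# Illustrative sketch only: greedy balanced partitioning of neuron populations
# onto GPUs, plus a two-level routing table (intra-node direct, inter-node via
# a per-node "gateway" GPU). Heuristics here are assumptions, not the paper's
# proposed algorithm.

def partition_populations(edges, weights, n_gpus):
    """Greedily assign neuron populations to GPUs.

    edges   : dict {pop_id: set of connected pop_ids}
    weights : dict {pop_id: neuron count}
    Returns : dict {pop_id: gpu_id}
    """
    load = [0] * n_gpus
    assign = {}
    # Place heavier populations first so the load stays balanced.
    for pop in sorted(weights, key=weights.get, reverse=True):
        best_gpu, best_score = 0, float("-inf")
        for gpu in range(n_gpus):
            # Favor GPUs that already host neighbors (less inter-GPU traffic)
            # and penalize heavily loaded GPUs (balance compute).
            affinity = sum(1 for nb in edges.get(pop, ()) if assign.get(nb) == gpu)
            score = affinity - load[gpu] / max(weights[pop], 1)
            if score > best_score:
                best_gpu, best_score = gpu, score
        assign[pop] = best_gpu
        load[best_gpu] += weights[pop]
    return assign


def build_two_level_routes(n_gpus, gpus_per_node):
    """Two-level routing: same-node GPUs talk directly; traffic to a remote
    node is relayed through that node's gateway (GPU 0 of the node)."""
    routes = {}
    for src in range(n_gpus):
        for dst in range(n_gpus):
            same_node = src // gpus_per_node == dst // gpus_per_node
            gateway = (dst // gpus_per_node) * gpus_per_node
            routes[(src, dst)] = dst if same_node else gateway
    return routes


if __name__ == "__main__":
    edges = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    weights = {0: 4, 1: 3, 2: 3, 3: 2}
    print(partition_populations(edges, weights, n_gpus=2))
    print(build_two_level_routes(n_gpus=4, gpus_per_node=2)[(0, 3)])  # relayed via GPU 2
```

Relaying inter-node spikes through a single gateway per node trades one extra hop for fewer, larger messages on the slower network tier, which is a common way to reduce congestion in hierarchical GPU clusters.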

Related research

10/30/2018 · Efficient Tree Solver for Hines Matrices on the GPU
The human brain consists of a large number of interconnected neurons com...

12/12/2018 · Real-time cortical simulations - Energy and interconnect scaling on distributed systems
We profile the impact of computation and inter-processor communication o...

02/27/2022 · PARIS and ELSA: An Elastic Scheduling Algorithm for Reconfigurable Multi-GPU Inference Servers
In cloud machine learning (ML) inference systems, providing low latency ...

08/06/2018 · Low-latency Networking: Where Latency Lurks and How to Tame It
While the current generation of mobile and fixed communication networks ...

08/07/2019 · Performance Comparison for Neuroscience Application Benchmarks
Researchers within the Human Brain Project and related projects have in ...

05/13/2020 · MPI+OpenMP Tasking Scalability for Multi-Morphology Simulations of the Human Brain
The simulation of the behavior of the human brain is one of the most amb...

02/28/2023 · Interconnect Bandwidth Heterogeneity on AMD MI250x and Infinity Fabric
Demand for low-latency and high-bandwidth data transfer between GPUs has...
