Simulation-based Optimization and Sensitivity Analysis of MPI Applications: Variability Matters

02/15/2021
by Tom Cornebize et al.

Finely tuning MPI applications and understanding the influence of key parameters (number of processes, granularity, collective operation algorithms, virtual topology, and process placement) is critical to obtain good performance on supercomputers. Given the high cost of running applications at scale, doing so solely to optimize their performance is particularly costly. Having inexpensive but faithful predictions of expected performance could be a great help for researchers and system administrators. The methodology we propose decouples the complexity of the platform, which is captured through statistical models of the performance of its main components (MPI communications, BLAS operations), from the complexity of adaptive applications by emulating the application and skipping regular non-MPI parts of the code.

We demonstrate the capability of our method with High-Performance Linpack (HPL), the benchmark used to rank supercomputers in the TOP500, which requires careful tuning. We briefly present (1) how the open-source version of HPL can be slightly modified to allow a fast emulation on a single commodity server at the scale of a supercomputer. Then we present (2) an extensive (in)validation study that compares simulation with real experiments and demonstrates our ability to consistently predict the performance of HPL within a few percent. This study allows us to identify the main modeling pitfalls (e.g., spatial and temporal node variability, network heterogeneity, and irregular behavior) that need to be considered. Last, we show (3) how our "surrogate" allows studying several subtle HPL parameter optimization problems while accounting for uncertainty on the platform.


research
05/15/2020

Elastic execution of checkpointed MPI applications

MPI applications begin with a fixed number of ranks and, by default, the ...
research
10/10/2018

ECHO-3DHPC: Advance the performance of astrophysics simulations with code modernization

We present recent developments in the parallelization scheme of ECHO-3DH...
research
12/30/2019

Performance Evaluation of Dynamic Scaling on MPI

Dynamic scaling aims to elastically change the number of processes durin...
research
04/23/2020

Accurate runtime selection of optimal MPI collective algorithms using analytical performance modelling

The performance of collective operations has been a critical issue since...
research
12/10/2021

MANA-2.0: A Future-Proof Design for Transparent Checkpointing of MPI at Scale

MANA-2.0 is a scalable, future-proof design for transparent checkpointin...
research
04/08/2023

C-Coll: Introducing Error-bounded Lossy Compression into MPI Collectives

With the ever-increasing computing power of supercomputers and the growi...
research
01/11/2018

MXNET-MPI: Embedding MPI parallelism in Parameter Server Task Model for scaling Deep Learning

Existing Deep Learning frameworks exclusively use either Parameter Serve...
