MLPerf HPC: A Holistic Benchmark Suite for Scientific Machine Learning on HPC Systems

10/21/2021
by Steven Farrell, et al.

Scientific communities are increasingly adopting machine learning and deep learning models in their applications to accelerate scientific insights. High performance computing systems are pushing the frontiers of performance with a rich diversity of hardware resources and massive scale-out capabilities. There is a critical need for fair and effective benchmarking of machine learning applications that are representative of real-world scientific use cases. MLPerf is a community-driven standard for benchmarking machine learning workloads, focusing on end-to-end performance metrics. In this paper, we introduce MLPerf HPC, a benchmark suite of large-scale scientific machine learning training applications driven by the MLCommons Association. We present results from the first submission round, which spanned a diverse set of some of the world's largest HPC systems. We develop a systematic framework for their joint analysis and compare the systems in terms of data staging, algorithmic convergence, and compute performance. As a result, we gain a quantitative understanding of optimizations across different subsystems, such as staging and on-node loading of data, compute-unit utilization, and communication scheduling, which together enable end-to-end performance improvements of more than 10× through system scaling. Notably, our analysis shows a scale-dependent interplay between the dataset size, a system's memory hierarchy, and training convergence that underlines the importance of near-compute storage. To overcome the data-parallel scalability challenge at large batch sizes, we discuss specific learning techniques and hybrid data-and-model parallelism that are effective on large systems. We conclude by characterizing each benchmark with respect to low-level memory, I/O, and network behavior to parameterize extended roofline performance models in future rounds.
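The abstract points to staging data onto near-compute storage and data-parallel scaling as two of the main levers behind the reported end-to-end speedups. As a rough illustration only (not code from the paper), the PyTorch sketch below stages a dataset from a shared parallel filesystem to node-local storage once per node before wrapping a model in DistributedDataParallel; the paths, the stand-in model, and the torchrun-style environment variables are all hypothetical.

```python
# Illustrative sketch (not from the paper): stage a dataset from a shared
# parallel filesystem to node-local storage, then train data-parallel.
# Paths, env variables, and the toy model are hypothetical placeholders.
import os
import shutil

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

SHARED_DATA = "/lustre/datasets/example"  # hypothetical parallel-FS path
LOCAL_DATA = "/tmp/nvme/example"          # hypothetical node-local NVMe path


def stage_to_node_local(local_rank: int) -> str:
    """Copy the dataset to node-local storage once per node (local rank 0)."""
    if local_rank == 0 and not os.path.exists(LOCAL_DATA):
        shutil.copytree(SHARED_DATA, LOCAL_DATA)
    dist.barrier()  # all ranks wait until staging has finished everywhere
    return LOCAL_DATA


def main() -> None:
    # Assumes torchrun-style env vars (RANK, WORLD_SIZE, MASTER_ADDR, ...).
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    data_root = stage_to_node_local(local_rank)  # reads now hit local storage

    model = torch.nn.Linear(64, 1).cuda()        # stand-in for a real network
    model = DDP(model, device_ids=[local_rank])  # gradient all-reduce per step
    # ... build a DataLoader over `data_root` and run the training loop ...


if __name__ == "__main__":
    main()
```

The staging step trades a one-time bulk copy for many fast local reads during training, which is the kind of interplay between dataset size and the memory/storage hierarchy the analysis quantifies.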

