Extending the Message Passing Interface (MPI) with User-Level Schedules

09/25/2019
by Derek Schafer, et al.

Composability is one of seven reasons for the long-standing and continuing success of MPI. Extending MPI by composing its operations with user-level operations provides useful integration with the progress engine and completion notification methods of MPI. However, the existing extensibility mechanism in MPI (generalized requests) is not widely utilized and has significant drawbacks. MPI can be generalized via scheduled communication primitives, for example, by utilizing implementation techniques from existing MPI-3 nonblocking collectives and from forthcoming MPI-4 persistent and partitioned APIs. Non-trivial schedules are used internally in some MPI libraries, but they are not accessible to end users. Message-based communication patterns can be built as libraries on top of MPI. Such libraries can have comparable implementation maturity and potentially higher performance than MPI library code, but do not require intimate knowledge of the MPI implementation. Libraries can provide performance-portable interfaces that cross MPI implementation boundaries. The ability to compose additional user-defined operations using the same progress engine benefits all kinds of general-purpose HPC libraries. We propose a definition for MPI schedules: a user-level programming model suitable for creating persistent collective communication composed with new application-specific sequences of user-defined operations managed by MPI and fully integrated with MPI progress and completion notification. The proposed API offers a path to standardization for extensible communication schedules involving user-defined operations. Our approach has the potential to introduce event-driven programming into MPI (beyond the tools interface), although connecting schedules with events remains future work. Early performance results described here are promising and indicate strong overlap potential.
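As a point of reference for the persistent-operation model the abstract builds on, the sketch below shows the standard MPI 4.0 persistent collective pattern (plan once with an `_init` call, then start and complete the same operation repeatedly). This uses only standard MPI 4.0 calls; it is not the proposed schedules API, which generalizes this plan/start/complete lifecycle to user-defined operation sequences. It requires an MPI 4.0 implementation (compile with `mpicc`, launch with `mpirun`).

```c
/* Sketch of the MPI-4 persistent collective pattern: the collective is
   planned once, then started and completed many times with low per-use
   overhead. Requires an MPI 4.0 implementation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, value, sum;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Request req;
    /* Plan the collective once (MPI 4.0 persistent API). */
    MPI_Allreduce_init(&value, &sum, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);

    /* Reuse the planned operation across iterations. */
    for (int iter = 0; iter < 3; iter++) {
        value = rank + iter;
        MPI_Start(&req);
        /* Application work could overlap with progress here. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    MPI_Request_free(&req);
    MPI_Finalize();
    return 0;
}
```

The separation of planning from execution is what enables an implementation to optimize the communication schedule ahead of time; the proposal in this paper exposes that same schedule machinery, plus user-defined operations, at the user level.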


