Offsite Autotuning Approach – Performance Model Driven Autotuning Applied to Parallel Explicit ODE Methods

04/07/2020
by Johannes Seiferth, et al.

Autotuning techniques are a promising approach to minimize the otherwise tedious manual effort of optimizing scientific applications for a specific target platform. Ideally, an autotuning approach is capable of reliably identifying the most efficient implementation variant(s) for a new target system or new characteristics of the input by applying suitable program transformations and analytic models. In this work, we introduce Offsite, an offline autotuning approach which automates this selection process at installation time by rating implementation variants based on an analytic performance model without requiring time-consuming runtime experiments. From abstract multilevel YAML description languages, Offsite automatically derives optimized, platform-specific and problem-specific code of possible implementation variants and applies the performance model to these implementation variants. We apply Offsite to parallel numerical methods for ordinary differential equations (ODEs). In particular, we investigate tuning a specific class of explicit ODE solvers (PIRK methods) for various initial value problems (IVPs) on shared-memory systems. Our experiments demonstrate that Offsite is able to reliably identify a set of the most efficient implementation variants for given test configurations (ODE solver, IVP, platform) and is capable of effectively handling important autotuning scenarios.
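The core idea described above — rating candidate implementation variants with an analytic performance model instead of runtime experiments — can be illustrated with a minimal sketch. This is not Offsite's actual code: the variant names, the toy roofline-style cost estimate, and the platform numbers are all illustrative assumptions.

```python
# Hypothetical sketch of offline variant selection (not Offsite's real model).
# Each candidate variant is rated by an analytic cost estimate and the
# cheapest one is chosen, with no runtime experiments needed.

from dataclasses import dataclass


@dataclass
class Variant:
    name: str           # implementation variant identifier (illustrative)
    flops: float        # floating-point operations per ODE step
    bytes_moved: float  # main-memory traffic per ODE step, in bytes


def predicted_time(v: Variant, peak_flops: float, bandwidth: float) -> float:
    """Toy roofline-style estimate: a step is limited either by
    compute throughput or by memory traffic, whichever is slower."""
    return max(v.flops / peak_flops, v.bytes_moved / bandwidth)


def rank_variants(variants, peak_flops, bandwidth):
    """Return the variants sorted by predicted runtime, best first."""
    return sorted(variants, key=lambda v: predicted_time(v, peak_flops, bandwidth))


if __name__ == "__main__":
    # Assumed platform characteristics (purely illustrative numbers).
    PEAK = 5e10  # FLOP/s
    BW = 2e10    # bytes/s of sustained memory bandwidth

    # Two hypothetical loop-order variants of the same ODE-solver kernel:
    # same arithmetic, different memory traffic.
    candidates = [
        Variant("ij_order", flops=4e6, bytes_moved=8e6),
        Variant("ji_order", flops=4e6, bytes_moved=2e6),
    ]
    best = rank_variants(candidates, PEAK, BW)[0]
    print(best.name)  # the memory-lighter variant wins on this platform
```

In this sketch the second variant is selected because, under the assumed bandwidth, its lower memory traffic yields the smaller predicted step time; Offsite applies the same selection principle with a far more detailed analytic model derived from its YAML descriptions.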


