On Extending Amdahl's law to Learn Computer Performance

10/15/2021
by Chaitanya Poolla, et al.

The problem of learning parallel computer performance is investigated in the context of multicore processors. Given a fixed workload, the effect of varying system configuration on performance is sought. Conventionally, the performance speedup due to a single resource enhancement is formulated using Amdahl's law. However, in the case of multiple configurable resources, the conventional formulation results in several disconnected speedup equations that cannot be combined to determine the overall speedup. To solve this problem, we propose to (1) extend Amdahl's law to accommodate multiple configurable resources in the overall speedup equation, and (2) transform the speedup equation into a multivariable regression problem suitable for machine learning. Using experimental data from two benchmarks (SPECCPU 2017 and PCMark 10) and four hardware platforms (Intel Xeon 8180M, AMD EPYC 7702P, Intel CoffeeLake 8700K, and AMD Ryzen 3900X), analytical models are developed and cross-validated. Findings indicate that in most cases, the models achieve an average cross-validated accuracy higher than 95%, thereby validating the proposed extension of Amdahl's law. The proposed methodology enables rapid generation of intelligent analytical models to support future industrial development, optimization, and simulation needs.
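The extended speedup formula is not spelled out in this abstract. As an illustrative sketch only (not necessarily the paper's exact formulation), a common multi-resource generalization of Amdahl's law assumes each configurable resource i accelerates its own fraction f_i of the workload by a factor s_i:

    S = 1 / (f_0 + sum_i f_i / s_i),   with f_0 + sum_i f_i = 1,

where f_0 is the fraction improved by no resource. Taking reciprocals gives 1/S = f_0 + sum_i f_i * (1/s_i), which is linear in the features 1/s_i, so the fractions can be estimated by multivariable regression from measured configurations. The Python sketch below illustrates this transformation on synthetic data with scikit-learn; the variable names and data are hypothetical, and it is not the paper's actual benchmark pipeline.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical measurements: each row holds the enhancement factors
    # s_1..s_n of the configurable resources for one system configuration.
    n_samples, n_resources = 50, 3
    enhancements = rng.uniform(1.0, 8.0, size=(n_samples, n_resources))

    # Synthetic "ground truth" fractions used only to generate example data:
    # f0 is the unimproved fraction, fi[i] the fraction sped up by resource i.
    f0, fi = 0.2, np.array([0.4, 0.3, 0.1])
    speedup = 1.0 / (f0 + (fi / enhancements).sum(axis=1))

    # Transformation: 1/S = f0 + sum_i f_i * (1/s_i) is linear in 1/s_i,
    # so the fractions can be estimated with ordinary least squares.
    X = 1.0 / enhancements   # features: reciprocal enhancement factors
    y = 1.0 / speedup        # target: reciprocal speedup

    model = LinearRegression()
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    model.fit(X, y)

    print("estimated f0 :", model.intercept_)
    print("estimated f_i:", model.coef_)
    print("cross-validated R^2:", r2.mean())

On real data, the fitted intercept and coefficients play the role of the serial and resource-specific fractions, and the cross-validated score indicates how well the extended-Amdahl form explains the measured speedups.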


