Coordinated Management of Processor Configuration and Cache Partitioning to Optimize Energy under QoS Constraints

11/12/2019
by Mehrzad Nejat, et al.

An effective way to improve energy efficiency is to throttle hardware resources to meet a certain performance target, specified as a QoS constraint, associated with all applications running on a multicore system. Prior art has proposed resource management (RM) frameworks in which the share of the last-level cache (LLC) assigned to each processor and the voltage-frequency (VF) setting of each processor are managed in a coordinated fashion to reduce energy. A drawback of such a scheme is that, when one core gives up LLC resources to another, its performance drop must be compensated for by a higher VF setting, which leads to a quadratic increase in energy consumption. By additionally allowing each core to be adapted to exploit instruction- and memory-level parallelism (ILP/MLP), substantially higher energy savings are enabled. This paper proposes a coordinated RM framework for LLC partitioning, processor adaptation, and per-core VF scaling. The first contribution is a systematic study of the trade-offs enabled when the three classes of resources are traded against each other in a coordinated fashion. The second contribution is a new RM framework that exploits these trade-offs to save more energy. Finally, accurately modeling the impact of resource throttling on performance requires predicting the amount of MLP with high accuracy. To this end, the paper contributes a mechanism that estimates the effect of MLP across different processor configurations and LLC allocations. Overall, we show that up to 18% and, on average, 10% of energy can be saved with the proposed framework.
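
The coordination described in the abstract can be thought of as selecting, for each core, a tuple of (LLC allocation, processor configuration, VF level) that minimizes predicted energy while keeping predicted performance within the QoS constraint. The sketch below illustrates that decision for a single core under stated assumptions: the knob values, the toy time/energy models, and the function names are illustrative only and are not the paper's implementation; the V^2 term in the energy model merely gestures at why compensating lost performance with a higher VF setting is quadratically costly.

from itertools import product

# Illustrative sketch only; all knob values and models below are assumptions.
LLC_WAYS   = [1, 2, 4, 8]                  # cache ways assignable to one core
CORE_CONFS = {"narrow": 1.0, "wide": 0.8}  # toy CPI factor for ILP/MLP exploitation
VF_LEVELS  = [(0.8, 1.0), (0.9, 1.4), (1.0, 1.8), (1.1, 2.2)]  # (volts, GHz)

def toy_time(ways, conf, vf):
    """Toy execution-time model: fewer ways -> more LLC misses -> more stall time."""
    _, freq = vf
    compute = CORE_CONFS[conf] / freq      # core time shrinks with frequency
    memory  = 0.5 / ways                   # stall time shrinks with more ways
    return compute + memory

def toy_energy(ways, conf, vf):
    """Toy energy model: dynamic energy per unit of work grows roughly with V^2."""
    volts, _ = vf
    core = volts ** 2 * (1.2 if conf == "wide" else 1.0)
    return core + 0.05 * ways              # small cost for the cache ways held

def choose_config(qos_time_target, ways_budget):
    """Pick the lowest-energy (ways, config, VF) setting that meets the QoS target.
    The paper's framework makes this decision per core and in a coordinated
    fashion across cores; this sketch shows only the single-core structure."""
    feasible = [
        (toy_energy(w, c, vf), w, c, vf)
        for w, c, vf in product(LLC_WAYS, CORE_CONFS, VF_LEVELS)
        if w <= ways_budget and toy_time(w, c, vf) <= qos_time_target
    ]
    return min(feasible) if feasible else None

print(choose_config(qos_time_target=1.0, ways_budget=8))

With these toy models, the cheapest feasible setting trades a few cache ways for a lower VF level rather than compensating at a high voltage, which is the kind of trade-off the coordinated framework is designed to exploit.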
