MLComp: A Methodology for Machine Learning-based Performance Estimation and Adaptive Selection of Pareto-Optimal Compiler Optimization Sequences

12/09/2020
by Alessio Colucci, et al.

Embedded systems have proliferated in various consumer and industrial applications with the evolution of Cyber-Physical Systems and the Internet of Things. These systems are subject to stringent constraints, so their embedded software must be optimized for multiple objectives simultaneously, namely reduced energy consumption, execution time, and code size. Compilers offer optimization phases to improve these metrics, but proper selection and ordering of the phases depends on multiple factors and typically requires expert knowledge. State-of-the-art optimizers handle different platforms and applications case by case, are limited to optimizing one metric at a time, and require time-consuming adaptation to new targets through dynamic profiling.

To address these problems, we propose the novel MLComp methodology, in which optimization phases are sequenced by a Reinforcement Learning-based policy. Training of the policy is supported by Machine Learning-based analytical models for quick performance estimation, drastically reducing the time spent on dynamic profiling. In our framework, different Machine Learning models are automatically tested to choose the best-fitting one. The trained Performance Estimator model is then leveraged to efficiently devise Reinforcement Learning-based multi-objective policies that create quasi-optimal phase sequences.

Compared to state-of-the-art estimation models, our Performance Estimator model achieves a lower relative error (<2%) over multiple platforms and application domains. Our Phase Selection Policy improves the execution time and energy consumption of a given code by up to 12% and 6%, respectively. Both models can be trained efficiently for any target platform and application domain.
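The sketch below is an illustrative toy example, not the authors' code: it shows the two ideas named in the abstract, automatically testing several candidate ML models to pick a best-fitting Performance Estimator, and then using that estimator as a fast stand-in for profiling while building a compiler phase sequence. The scikit-learn candidate set, the synthetic feature/runtime data, the pass list, the feature encoding, and the greedy search (used here in place of the paper's Reinforcement Learning policy) are all hypothetical placeholders.

"""
Toy sketch of MLComp-style Performance Estimator selection and
estimator-guided phase selection. All data and encodings are synthetic.
"""
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dataset: code features plus pass-usage indicators -> runtime.
# In the real methodology these labels would come from profiling the target.
N_SAMPLES, N_FEATURES = 500, 24
X = rng.random((N_SAMPLES, N_FEATURES))
y = X @ rng.random(N_FEATURES) + 0.05 * rng.standard_normal(N_SAMPLES)

# (1) Automatically test several candidate models and keep the best-fitting one.
candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
scores = {name: cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
estimator = candidates[best_name].fit(X, y)
print(f"Selected Performance Estimator: {best_name} (R^2 = {scores[best_name]:.3f})")

# (2) Phase selection driven by the estimator instead of slow dynamic profiling.
# PASSES and encode() are toy stand-ins for real compiler passes and a real
# program/pass-sequence feature encoding.
PASSES = list(range(8))
SEQ_LEN = 5

def encode(base_features, sequence):
    """Toy encoding: program features with pass-usage flags overlaid."""
    feats = base_features.copy()
    for p in sequence:
        feats[p % N_FEATURES] += 1.0
    return feats

program = rng.random(N_FEATURES)
sequence = []
for _ in range(SEQ_LEN):
    # Append the pass whose predicted runtime (lower is better) is smallest.
    preds = [estimator.predict(encode(program, sequence + [p]).reshape(1, -1))[0]
             for p in PASSES]
    sequence.append(PASSES[int(np.argmin(preds))])

print("Chosen phase sequence (toy):", sequence)

In the actual methodology, the greedy loop above would be replaced by the trained multi-objective Reinforcement Learning policy; the key point retained here is that the estimator's predictions substitute for costly on-device measurements of each candidate sequence.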

Related research

02/27/2018
Less is More: Exploiting the Standard Compiler Optimization Levels for Better Performance and Energy Consumption
This paper presents the interesting observation that by performing fewer...

02/22/2023
Multi-objective optimization of energy consumption and execution time in a single level cache memory for embedded systems
Current embedded systems are specifically designed to run multimedia app...

07/27/2022
POSET-RL: Phase ordering for Optimizing Size and Execution Time using Reinforcement Learning
The ever increasing memory requirements of several applications has led ...

11/29/2018
TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural Networks
Embedded deep learning platforms have witnessed two simultaneous improve...

01/05/2022
Dynamic GPU Energy Optimization for Machine Learning Training Workloads
GPUs are widely used to accelerate the training of machine learning work...

01/19/2021
Dynamic Bicycle Dispatching of Dockless Public Bicycle-sharing Systems using Multi-objective Reinforcement Learning
As a new generation of Public Bicycle-sharing Systems (PBS), the dockles...

03/25/2019
On the use of Deep Autoencoders for Efficient Embedded Reinforcement Learning
In autonomous embedded systems, it is often vital to reduce the amount o...
