Optimal control policies for resource allocation in the Cloud: comparison between Markov decision process and heuristic approaches

04/30/2021
by Thomas Tournaire, et al.

We consider an auto-scaling technique in a cloud system where virtual machines hosted on a physical node are turned on and off depending on the queue occupation (via thresholds), in order to minimise a global cost integrating both energy consumption and performance. We propose several efficient optimisation methods to find the threshold values minimising this global cost: local search heuristics coupled with Markov chain aggregation and queue approximation techniques to reduce execution time and improve accuracy. A second approach tackles the problem with a Markov Decision Process (MDP), for which we carry out a theoretical study and a theoretical comparison with the first approach. We also develop structured MDP algorithms that exploit hysteresis properties. We show that MDP algorithms (value iteration, policy iteration), and especially structured MDP algorithms, outperform the devised heuristics in terms of execution time and accuracy. Finally, we propose a cost model for a realistic cloud system scenario, apply our optimisation algorithms to it, and show their relevance.
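To make the setting concrete, below is a minimal illustrative sketch (not the authors' code) of value iteration for a tiny auto-scaling MDP of the kind described in the abstract: the state is (jobs in the queue, active VMs), the action chooses how many VMs to keep on, and the stage cost combines energy, holding, and switching costs. All rates, costs, and capacities are hypothetical placeholders.

```python
import numpy as np

# Hypothetical model parameters (placeholders, not taken from the paper)
B, K = 10, 3                        # queue capacity, max number of VMs
lam, mu = 2.0, 1.0                  # arrival rate, per-VM service rate
c_vm, c_hold, c_sw = 1.0, 0.5, 0.2  # energy, holding, and switching costs
gamma = 0.95                        # discount factor
unif = lam + K * mu                 # uniformization constant

# V[n, k] = value when n jobs are queued and k VMs are currently on
V = np.zeros((B + 1, K + 1))
for _ in range(1000):
    V_new = np.empty_like(V)
    for n in range(B + 1):
        for k in range(K + 1):
            best = np.inf
            for a in range(K + 1):  # action: number of VMs kept on
                cost = c_vm * a + c_hold * n + c_sw * abs(a - k)
                # uniformized one-step transition probabilities
                p_arr = lam / unif if n < B else 0.0
                p_dep = min(n, a) * mu / unif
                p_stay = 1.0 - p_arr - p_dep
                ev = (p_arr * V[min(n + 1, B), a]
                      + p_dep * V[max(n - 1, 0), a]
                      + p_stay * V[n, a])
                best = min(best, cost + gamma * ev)
            V_new[n, k] = best
    V = V_new
```

Under such a cost structure the resulting optimal policy is typically of threshold (hysteresis) type, which is what both the heuristic search over thresholds and the structured MDP algorithms in the paper exploit.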
