Stochastic Control of Computation Offloading to a Helper with a Dynamically Loaded CPU

02/27/2018
by Yunzheng Tao, et al.

Due to the densification of wireless networks, there exists an abundance of idle computation resources at edge devices. These resources can be scavenged by offloading heavy computation tasks from small IoT devices in proximity, thereby overcoming their limitations and lengthening their battery lives. However, unlike dedicated servers, the spare resources offered by edge helpers are random and intermittent. Thus, it is essential for a user to intelligently control the amounts of data for offloading and local computing so as to ensure that a computation task is finished on time while consuming minimum energy. In this paper, we design energy-efficient control policies for a computation-offloading system with a random channel and a helper with a dynamically loaded CPU. Specifically, the policy adopted by the user determines the sizes of offloaded and locally computed data for a given task in different slots such that the total energy consumption for transmission and local computing is minimized under a task-deadline constraint. As a result, the policies endow an offloading user with robustness against channel-and-helper randomness, besides balancing offloading and local computing. By modeling the channel and the helper CPU as Markov chains, the offloading-control problem is converted into a Markov decision process. Although dynamic programming (DP) for numerically solving the problem does not yield the optimal policies in closed form, we leverage the DP procedure to characterize the optimal policy structure and apply the result to design optimal or sub-optimal policies. For cases ranging from zero to large buffers, the low complexity of the policies overcomes the "curse of dimensionality" in DP that arises from the joint consideration of channel, helper-CPU, and buffer states.
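To make the approach concrete, the sketch below implements backward-induction dynamic programming for a toy zero-buffer instance of this kind of setup. All numbers in it are illustrative assumptions rather than values from the paper: the channel-gain and helper-CPU Markov chains, the convex transmission- and local-computing-energy models, the task size, and the per-slot local rate are invented for the example. The state is (remaining data, channel state, helper-CPU state), the action splits each slot's data between offloading and local computing, and the deadline is enforced by an infinite terminal cost on unfinished tasks.

```python
import numpy as np

# --- Illustrative model parameters (assumptions, not values from the paper) ---
T = 4                                # deadline in slots
L = 6                                # task size in data units
CH_STATES = [0.5, 1.0, 2.0]          # channel power gains (better gain -> cheaper transmission)
CPU_STATES = [0, 1, 2]               # helper-CPU capacity offered per slot (data units)
P_CH = np.array([[0.6, 0.3, 0.1],    # channel Markov transition matrix
                 [0.2, 0.6, 0.2],
                 [0.1, 0.3, 0.6]])
P_CPU = np.array([[0.5, 0.4, 0.1],   # helper-CPU Markov transition matrix
                  [0.3, 0.4, 0.3],
                  [0.1, 0.4, 0.5]])
LOCAL_RATE = 2                       # max data units the user can compute locally per slot
INF = float("inf")

def tx_energy(units, gain):
    """Convex transmission-energy model: cost grows with offloaded data, shrinks with gain."""
    return units ** 2 / gain

def local_energy(units):
    """Convex local-computing energy model (e.g., cubic in processed data)."""
    return 0.5 * units ** 3

# V[t][(r, c, h)]: minimum expected energy-to-go when r data units remain and the
# channel / helper-CPU states are (c, h) at the start of slot t.
V = [dict() for _ in range(T + 1)]
for r in range(L + 1):
    for c in range(len(CH_STATES)):
        for h in range(len(CPU_STATES)):
            # Terminal condition enforcing the deadline: unfinished data is infeasible.
            V[T][(r, c, h)] = 0.0 if r == 0 else INF

for t in reversed(range(T)):
    for r in range(L + 1):
        for c, gain in enumerate(CH_STATES):
            for h, cap in enumerate(CPU_STATES):
                best = INF
                # Action: offload o units (limited by the helper's spare CPU this slot)
                # and compute m units locally.
                for o in range(min(r, cap) + 1):
                    for m in range(min(r - o, LOCAL_RATE) + 1):
                        stage = tx_energy(o, gain) + local_energy(m)
                        # Expectation over independent channel and helper-CPU transitions.
                        togo = sum(P_CH[c, c2] * P_CPU[h, h2] * V[t + 1][(r - o - m, c2, h2)]
                                   for c2 in range(len(CH_STATES))
                                   for h2 in range(len(CPU_STATES)))
                        best = min(best, stage + togo)
                V[t][(r, c, h)] = best

# Minimum expected energy for the whole task, starting from the middle channel/CPU states.
print(V[0][(L, 1, 1)])
```

Enumerating the joint (data, channel, CPU) state space as this sketch does is exactly where the curse of dimensionality bites once a buffer state is added, which is what motivates the paper's low-complexity structured policies.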
