A Distributed Deep Reinforcement Learning Technique for Application Placement in Edge and Fog Computing Environments

by Mohammad Goudarzi et al.

Fog/edge computing is a computing paradigm that supports resource-constrained Internet of Things (IoT) devices by placing their tasks on edge and/or cloud servers. Recently, several Deep Reinforcement Learning (DRL)-based placement techniques have been proposed for fog/edge computing environments, but they are only suitable for centralized setups. Training well-performing DRL agents requires large amounts of training data, and obtaining such data is costly. Hence, these centralized DRL-based techniques lack generalizability and quick adaptability, and so fail to tackle application placement problems efficiently. Moreover, many IoT applications are modeled as Directed Acyclic Graphs (DAGs) with diverse topologies. Satisfying the dependencies of DAG-based IoT applications incurs additional constraints and increases the complexity of the placement problem. To overcome these challenges, we propose an actor-critic-based distributed application placement technique built on the IMPortance weighted Actor-Learner Architecture (IMPALA). IMPALA is known for efficient distributed generation of experience trajectories, which significantly reduces agents' exploration costs. Besides, it uses an adaptive off-policy correction method for faster convergence to optimal solutions. Our technique uses recurrent layers to capture temporal behaviors of the input data and a replay buffer to improve sample efficiency. Performance results from simulation and testbed experiments demonstrate that our technique reduces the execution cost of IoT applications by up to 30% compared to its counterparts.
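IMPALA's adaptive off-policy correction, known as V-trace, re-weights the actors' experience with truncated importance ratios so that learner updates remain stable even when the behaviour policy lags behind the learner's target policy. The sketch below is our own minimal simplification, not the paper's implementation: the function name, NumPy representation, and single-trajectory shapes are assumptions for illustration.

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace value targets for one trajectory of length T.

    rhos are importance ratios pi(a|x)/mu(a|x) between the learner's
    target policy pi and the actor's behaviour policy mu.
    """
    T = len(rewards)
    clipped_rhos = np.minimum(rho_bar, rhos)  # truncated rho_t
    clipped_cs = np.minimum(c_bar, rhos)      # truncated "trace" c_t
    values_tp1 = np.append(values[1:], bootstrap_value)
    # TD errors, scaled by the truncated importance ratios
    deltas = clipped_rhos * (rewards + gamma * values_tp1 - values)

    # Backward recursion: (v_s - V(x_s)) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    vs_minus_v = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * clipped_cs[t] * acc
        vs_minus_v[t] = acc
    return values + vs_minus_v  # the v_s targets for the critic
```

A useful sanity check: with on-policy data (all ratios equal to 1), the targets reduce to ordinary n-step returns.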


