DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip Resource Scheduling

by Tegg Taekyong Sung, et al.

In this paper, we present a novel scheduling solution for a class of System-on-Chip (SoC) systems in which heterogeneous chip resources (DSP, FPGA, GPU, etc.) must be efficiently scheduled for continuously arriving hierarchical jobs whose tasks are represented by directed acyclic graphs (DAGs). Heuristic algorithms have traditionally been used in many resource scheduling domains, and Heterogeneous Earliest Finish Time (HEFT) has for many years been the dominant state-of-the-art technique across a broad range of heterogeneous resource scheduling problems. Despite their long-standing popularity, however, HEFT-like algorithms are known to be vulnerable to even small amounts of noise in the environment. Our Deep Reinforcement Learning (DRL)-based SoC Scheduler (DeepSoCS), capable of learning the 'best' task ordering under dynamic environment changes, overcomes the brittleness of rule-based schedulers such as HEFT and delivers significantly higher performance across different types of jobs. We describe the DeepSoCS design process using a real-time heterogeneous SoC scheduling emulator, discuss the major challenges, and present two novel neural network design features that allow it to outperform HEFT: (i) hierarchical job- and task-graph embedding; and (ii) efficient use of real-time task information in the state space. Furthermore, we introduce effective techniques to address two fundamental challenges present in our environment: delayed consequences and joint actions. Through an extensive simulation study, we show that DeepSoCS achieves significantly better job execution time than HEFT, with a higher level of robustness under realistic noise conditions. We conclude with a discussion of potential improvements to the DeepSoCS neural scheduler.
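As a point of reference for the baseline the abstract compares against, here is a minimal sketch of the upward-rank computation at the core of HEFT, which prioritizes DAG tasks by their longest average path to an exit task. The graph, task names, and cost values below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the HEFT upward-rank heuristic.
# All task names, costs, and graph structure here are illustrative
# assumptions for demonstration, not data from the paper.
from functools import lru_cache

# DAG as adjacency list: task -> list of successor tasks.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
# Average computation cost of each task across heterogeneous resources.
avg_cost = {"A": 3.0, "B": 2.0, "C": 4.0, "D": 1.0}
# Average communication cost on each edge.
comm = {("A", "B"): 1.0, ("A", "C"): 1.0, ("B", "D"): 2.0, ("C", "D"): 0.5}

@lru_cache(maxsize=None)
def upward_rank(task):
    # rank_u(t) = avg_cost(t) + max over successors s of (comm(t,s) + rank_u(s));
    # an exit task's rank is just its own average cost.
    children = succ[task]
    if not children:
        return avg_cost[task]
    return avg_cost[task] + max(comm[(task, s)] + upward_rank(s) for s in children)

# HEFT orders tasks by decreasing upward rank, then greedily assigns each
# task to the resource giving the earliest finish time (assignment not shown).
order = sorted(succ, key=upward_rank, reverse=True)
print(order)  # tasks in scheduling priority order
```

Because the ranking depends only on static average costs, any runtime noise in actual execution or communication times is invisible to this priority order, which is the brittleness DeepSoCS is designed to overcome.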


