Neural Heterogeneous Scheduler

06/09/2019
by Tegg Taekyong Sung, et al.

Access to parallel and distributed computation has enabled researchers and developers to improve algorithms and performance in many applications. Recent research has focused on next-generation special-purpose systems with multiple kinds of coprocessors, known as heterogeneous systems-on-chip (SoC). In this paper, we introduce a method to intelligently schedule, and learn to schedule, a stream of tasks onto the available processing elements in such a system. We use deep reinforcement learning, which enables complex sequential decision making, and we show empirically that our reinforcement learning scheduler is a viable and better alternative to conventional scheduling heuristics with respect to minimizing execution time.
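The abstract gives only the high-level idea, so the following is a minimal sketch of what "learning to dispatch a stream of tasks onto heterogeneous processing elements" can look like. It substitutes a simple tabular, bandit-style learner for the paper's deep reinforcement learning agent, and every processing-element count, task type, timing value, and reward choice below is an illustrative assumption rather than the authors' method.

import numpy as np

# Hypothetical heterogeneous SoC: each task type runs faster on some
# processing elements (PEs) than on others. All numbers are made up.
NUM_PE = 3          # e.g. CPU core, GPU, accelerator
NUM_TASK_TYPES = 4  # e.g. FFT, matrix multiply, decode, ...

rng = np.random.default_rng(0)
# exec_time[task_type, pe] = time that task type takes on that PE
exec_time = rng.uniform(1.0, 10.0, size=(NUM_TASK_TYPES, NUM_PE))

# Tabular value estimates: state = task type, action = chosen PE,
# reward = negative completion time of the dispatched task.
Q = np.zeros((NUM_TASK_TYPES, NUM_PE))
alpha, epsilon = 0.1, 0.1

pe_free_at = np.zeros(NUM_PE)  # time at which each PE becomes idle again

def schedule(task_type, arrival):
    """Pick a PE for the incoming task and update the value table."""
    if rng.random() < epsilon:
        pe = int(rng.integers(NUM_PE))       # explore
    else:
        pe = int(np.argmax(Q[task_type]))    # exploit current estimates
    start = max(arrival, pe_free_at[pe])     # wait if the PE is busy
    finish = start + exec_time[task_type, pe]
    pe_free_at[pe] = finish
    reward = -(finish - arrival)             # shorter completion time = higher reward
    Q[task_type, pe] += alpha * (reward - Q[task_type, pe])
    return finish

# Stream of tasks arriving one time unit apart.
makespan = 0.0
for t in range(1000):
    task_type = int(rng.integers(NUM_TASK_TYPES))
    makespan = max(makespan, schedule(task_type, arrival=float(t)))
print("approximate makespan after learning:", makespan)

In the paper the scheduler is a deep network conditioned on a richer view of system state; this snippet only illustrates the kind of reward signal (negative completion time) that such a learned scheduler can optimize in place of a fixed heuristic.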

