Runtime Task Scheduling using Imitation Learning for Heterogeneous Many-Core Systems

Domain-specific systems-on-chip, a class of heterogeneous many-core systems, are recognized as a key approach to narrow down the performance and energy-efficiency gap between custom hardware accelerators and programmable processors. Reaching the full potential of these architectures depends critically on optimally scheduling the applications to available resources at runtime. Existing optimization-based techniques cannot achieve this objective at runtime due to the combinatorial nature of the task scheduling problem. As the main theoretical contribution, this paper poses scheduling as a classification problem and proposes a hierarchical imitation learning (IL)-based scheduler that learns from an Oracle to maximize the performance of multiple domain-specific applications. Extensive evaluations with six streaming applications from wireless communications and radar domains show that the proposed IL-based scheduler approximates an offline Oracle policy with more than 99% accuracy. Furthermore, it achieves almost identical performance to the Oracle with a low runtime overhead and successfully adapts to new applications, many-core system configurations, and runtime variations in application characteristics.

I Introduction

Homogeneous multi-core architectures have successfully exploited thread- and data-level parallelism to achieve performance and energy efficiency beyond the limits of single-core processors. While general-purpose computing offers programming flexibility, it suffers from a significant performance and energy-efficiency gap when compared to special-purpose solutions. Domain-specific architectures, such as graphics processing units (GPUs) and neural network processors, are recognized as some of the most promising solutions to reduce this gap [11]. Domain-specific systems-on-chip (DSSoCs), a concrete instance of this new architecture, judiciously combine general-purpose cores, special-purpose processors, and hardware accelerators. DSSoCs approach the efficacy of fixed-function solutions for a specific domain while maintaining programming flexibility for other domains [9].

The success of DSSoCs depends critically on satisfying two intertwined requirements. First, the available processing elements (PEs) must be utilized optimally, at runtime, to execute the incoming tasks. For instance, scheduling all tasks to general-purpose cores may work, but diminishes the benefits of the special-purpose PEs. Likewise, a static task-to-PE mapping could unnecessarily stall the parallel instances of the same task. Second, acceleration of the domain-specific applications needs to be oblivious to the application developers to make DSSoCs practical. This paper addresses these two requirements simultaneously.

The task scheduling problem involves assigning tasks to processing elements and ordering their execution to achieve the optimization goals, e.g., minimizing execution time, power dissipation, or energy consumption. To this end, applications are abstracted using mathematical models, such as directed acyclic graphs (DAGs) and synchronous data graphs (SDGs), that capture both the attributes of individual tasks (e.g., expected execution time) and the dependencies among the tasks [31, 4, 26]. Scheduling these tasks to the available PEs is a well-known NP-complete problem [7, 32]. An optimal static schedule can be found for small problem sizes using optimization techniques, such as mixed-integer programming (MIP) [8] and constraint programming (CP) [25]. These approaches are not applicable to runtime scheduling for two fundamental reasons. First, statically computed schedules lose relevance in a dynamic environment where tasks from multiple applications stream in parallel, and PE utilizations change dynamically. Second, the execution time of these algorithms, hence their overhead, can be prohibitive even for small problem sizes with a few tens of tasks. Therefore, a variety of heuristic schedulers, such as shortest job first (SJF) [34] and the completely fair scheduler (CFS) [22], are used in practice for homogeneous systems. These algorithms trade off the quality of scheduling decisions against computational overhead.

To improve this state of affairs, this paper addresses the following challenging proposition: Can we achieve scheduler performance close to that of optimal MIP and CP schedulers, while incurring minimal runtime overhead compared to commonly used heuristics? Furthermore, we investigate this problem in the context of heterogeneous PEs. We note that much of the scheduling in heterogeneous many-core systems is tuned manually, even to date [1]. For example, OpenCL, a widely-used programming model for heterogeneous cores, leaves the scheduling problem to the programmers. Experts manually optimize the task-to-resource mapping based on their knowledge of the application(s), characteristics of the heterogeneous clusters, data transfer costs, and platform architecture. However, manual optimization scales poorly for two reasons. First, the optimizations do not carry over to all applications. Second, extensive engineering effort is required to adapt the solutions to different platform architectures and varying levels of concurrency in applications. Hence, there is a critical need for a methodology that provides optimized scheduling solutions applicable to a variety of applications at runtime in heterogeneous many-core systems.

Scheduling has traditionally been considered an optimization problem [8]. We change this perspective by formulating runtime scheduling for heterogeneous many-core platforms as a classification problem. This perspective and the following key insights enable us to employ machine learning (ML) techniques to solve this problem:

Key insight 1: One can use an optimal (or near-optimal) scheduling algorithm offline without being limited by computational time and other runtime overheads. Then, the inputs to this scheduler and its decisions can be recorded along with relevant features to construct an Oracle.

Key insight 2: One can design a policy that approximates the Oracle with minimum overhead and use this policy at runtime.

Key insight 3: One can exploit the effectiveness of ML to learn from Oracles with different objectives, including minimizing execution time, energy consumption, etc.

Realizing this vision requires addressing several challenges. First, we need to construct an Oracle in a dynamic environment where tasks from multiple applications can overlap arbitrarily, and each incoming application instance observes a different system state. Finding optimal schedules is challenging even offline, since the underlying problem is NP-complete. We address this challenge by constructing Oracles using both CP and a computationally expensive heuristic, called earliest task first (ETF) [12]. ML uses informative properties of the system (features) to predict the category in a classification problem. The second challenge is identifying the minimal set of relevant features that can lead to high accuracy with minimal overhead. We store a small set of 45 relevant features for a many-core platform with 16 processing elements along with the Oracle decisions to minimize the runtime overhead. This enables us to represent a complex scheduling decision as a set of features and then predict the best processing element for task execution. The final challenge is approximating the Oracle accurately with minimum implementation overhead. Since runtime task scheduling is a sequential decision-making problem, supervised learning methodologies, such as linear regression and regression trees, may not generalize to unseen states at runtime. Reinforcement learning (RL) and imitation learning (IL) are more effective for sequential decision-making problems [29, 17, 27]. Indeed, RL has shown promise when applied to the scheduling problem [18, 19, 36], but it suffers from slow convergence and sensitivity to the reward function [13, 16]. In contrast, IL takes advantage of the expert’s inherent knowledge and produces policies that imitate the expert decisions [28]. Hence, we propose an IL-based framework that schedules incoming applications to heterogeneous many-core systems.

The proposed IL framework is formulated to facilitate generalization, i.e., it can be adapted to learn from any Oracle that optimizes a specific objective, such as performance or energy efficiency, of an arbitrary DSSoC. We evaluate the proposed framework with six domain-specific applications from wireless communications and radar systems. The proposed IL policies successfully approximate the Oracle with more than 99% accuracy, achieving fast convergence and generalizing to unseen applications. In addition, the scheduling decisions are made within 1.1 μs (on an Arm Cortex-A53 core), which is lower than the overhead of CFS (1.2 μs). To the best of our knowledge, this is the first imitation learning-based scheduling framework for heterogeneous many-core systems capable of handling multiple applications exhibiting streaming behavior. The main contributions of this paper are as follows:


  • An imitation learning framework to construct policies for task scheduling in heterogeneous many-core platforms;

  • Oracle design using both optimal and heuristic schedulers for performance- and energy-based optimization objectives;

  • Extensive experimental evaluation of the proposed IL policies along with latency and storage overhead analysis;

  • Performance comparison of IL policies against reinforcement learning and optimal schedules obtained by constraint programming.

The rest of the paper is organized as follows. We review the related work in Section II. Section III provides background information on DAG scheduling and imitation learning. In Section IV, we discuss the proposed methodology, followed by relevant experimental results in Section V. Section VI presents the conclusions and possible future research for this work.

II Related Work and Novel Contributions

Fig. 1: (a) An example DAG consisting of 7 tasks, (b) a heterogeneous computing platform with 4 processing elements and the list of tasks in the DAG supported by each PE, and (c) a sample schedule of the DAG on the heterogeneous many-core system.

Current many-core systems use runtime heuristics to enable scheduling with low overheads. For example, the completely fair scheduler (CFS) [22], widely used in Linux systems, aims to provide fairness for all processes in the system. CFS maintains two queues (active and expired) to manage task scheduling. In addition, CFS gives a fixed time quantum to each process. Tasks are swapped between the active and expired queues based on activation and expiration of the time quantum. However, complex heuristics are required to manage such queues. CFS also does not generalize to optimization objectives other than performance and fairness. More importantly, CFS scheduling is limited to general-purpose cores and lacks support for specialized cores and hardware accelerators [5]. With the same limitations, the shortest job first (SJF) scheduler [34] estimates each task’s CPU processing time and assigns the first available resource to the task with the shortest execution time.

List scheduling techniques [26, 14] for DAGs [31, 6, 2] prioritize various objectives, such as energy [4, 30], fairness [40], and security [39]. In general, this technique places the nodes (tasks) of a DAG in a list and provides a PE assignment and execution order at design time. Heterogeneous earliest finish time (HEFT) [31] is one example, in which an upward rank is computed to make the scheduling decisions. The authors in [6] use a lookahead algorithm as an enhancement to the HEFT scheduler to improve the execution time, but it suffers from fourth-order complexity in the number of tasks (O(n^4)). Another recent technique improves performance with quadratic complexity (O(n^2)) [2]. However, these algorithms suffer from high time complexity, are tailored to particular objectives, and fail to generalize to a combination of objectives or an arbitrary choice of applications.

Fig. 2: An overview of the proposed imitation learning framework for task scheduling in heterogeneous many-core systems. The framework integrates the system configuration, profiling information, scheduling algorithms, and applications to construct the Oracle and train IL policies for task scheduling. The IL policies, which are improved using DAgger, are then evaluated on the heterogeneous many-core system at runtime.

Machine learning (ML)-based schedulers show promise in eliminating the drawbacks of list scheduling and runtime heuristic techniques. ML-based schedulers can also be further tuned at runtime [18]. A recent support vector machine (SVM)-based scheduler for OpenCL kernels assigns kernels (tasks) between CPUs and GPUs [37]. In contrast to schedulers that use supervised learning, the authors in [20] use reinforcement learning (RL) to schedule TensorFlow device placement, but the approach lacks the ability to schedule streaming jobs. DeepRM [18] uses deep neural networks with RL for scheduling at an application granularity, as opposed to using the notion of DAGs. On the other hand, Decima [19] uses a combination of graph neural networks and RL to perform coarse-grained, processing-cluster-level scheduling for streaming DAGs.

RL-based scheduling techniques have two major drawbacks. First, they require a significant number of episodes to converge. For example, the technique proposed in [19] takes 50k episodes, at 1.5 seconds each, to converge to a solution, which is equivalent to about 21 hours of simulation on an Nvidia Tesla P100 GPU. Second, the efficiency of an RL-based technique depends predominantly on the choice of the reward function. Usually, the reward function is hand-tuned, depending on the problem under consideration.

To overcome these difficulties, we propose an IL-based scheduling methodology. Since IL uses an Oracle to construct a policy, it does not suffer from the slow convergence seen in RL. IL-based policies were initially used in robotics, where their fast convergence was demonstrated [28]. Recently, imitation learning has been used to intelligently manage power and energy consumption in SoCs [13, 16]. To the best of our knowledge, this is the first approach that applies IL to multi-application streaming task scheduling in heterogeneous many-core platforms.

III Background and Overview

The runtime scheduling problem addressed in this paper is illustrated in Fig. 1. We consider streaming applications that can be modeled using directed acyclic graphs, such as the one shown in Fig. 1(a). These applications process data frames that arrive at a varying rate over time. For example, a WiFi transmitter, one of our domain applications, receives and encodes raw data frames before they are transmitted over the air. Data frames from a single application or multiple simultaneous applications can overlap in time as they go through the tasks that compose the application. For instance, Task-1 in Fig. 1(a) can start processing a new frame while other tasks continue processing earlier frames. Processing of a frame is said to be complete after the terminal task, i.e., the task without any successor (Task-7 in Fig. 1(a)), is executed. We formally define the application graph to facilitate the description of the schedulers.

Definition 1: An application graph G_App = (𝒯, ℰ) is a directed acyclic graph, where each node T_i ∈ 𝒯 represents a task that composes the application. A directed edge e_ij ∈ ℰ from task T_i to T_j indicates that T_j cannot start processing a new frame before the output of T_i reaches T_j, for all T_i, T_j ∈ 𝒯. The communication volume v_ij associated with each edge e_ij denotes the data volume over this edge; it is used to account for the communication latency.

Each task in a given application graph can execute on different processing elements in the target DSSoC. We formally define the target DSSoC as follows:

Definition 2: An architecture graph G_Arch = (𝒫, ℒ) is a directed graph, where each node P_i ∈ 𝒫 represents a processing element, and each edge L_ij ∈ ℒ represents the communication link between P_i and P_j in the target SoC. The nodes and links have the following quantities associated with them:

  • t_exe(P_i, T_j) is the execution time of task T_j on PE P_i, if P_i can execute (i.e., it supports) T_j.

  • t_comm(L_ij) is the communication latency from P_i to P_j, for all L_ij ∈ ℒ.

  • C(P_i) is the PE cluster that P_i belongs to.

The DSSoC example in Fig. 1(b) assumes one big core cluster, one LITTLE core cluster, and two hardware accelerators, each with a single PE, for simplicity. The low-power (LITTLE) and high-performance (big) general-purpose clusters can support the execution of all tasks, as shown in the supported-tasks column in Fig. 1(b). In contrast, the hardware accelerators (Acc-1 and Acc-2) support only a subset of tasks.
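
To make these definitions concrete, the sketch below builds a toy version of the DAG in Fig. 1(a) and the platform in Fig. 1(b). It is only an illustration under assumed numbers and hypothetical names, not the authors' data structures; the networkx library is assumed to be available.

```python
# Minimal sketch of Definitions 1 and 2 (illustrative values only).
import networkx as nx

# Application graph: nodes are tasks; edge attribute 'volume' is v_ij.
app = nx.DiGraph()
app.add_nodes_from(range(1, 8))                      # Task-1 ... Task-7 as in Fig. 1(a)
app.add_edges_from([(1, 2, {"volume": 32}), (1, 3, {"volume": 16}),
                    (2, 4, {"volume": 8}),  (3, 5, {"volume": 8}),
                    (4, 6, {"volume": 4}),  (5, 6, {"volume": 4}),
                    (6, 7, {"volume": 2})])

# Architecture: one entry per PE with its cluster, supported tasks,
# and per-task execution times t_exe(P_i, T_j) (hypothetical numbers).
pes = {
    "LITTLE-0": {"cluster": "LITTLE", "supports": set(range(1, 8)),
                 "t_exe": {t: 10.0 for t in range(1, 8)}},
    "big-0":    {"cluster": "big",    "supports": set(range(1, 8)),
                 "t_exe": {t: 5.0 for t in range(1, 8)}},
    "Acc-1":    {"cluster": "Acc-1",  "supports": {2, 5}, "t_exe": {2: 1.0, 5: 1.0}},
    "Acc-2":    {"cluster": "Acc-2",  "supports": {4, 6}, "t_exe": {4: 0.5, 6: 0.5}},
}

# A task is "ready" once all of its predecessors have finished.
def ready_tasks(done):
    return [t for t in app.nodes
            if t not in done and all(p in done for p in app.predecessors(t))]

print(ready_tasks(done=set()))                                   # -> [1]
print({n: d["t_exe"][1] for n, d in pes.items() if 1 in d["supports"]})
```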

T_j | Task-j
𝒯 | Set of tasks
P_i | PE-i
𝒫 | Set of PEs
C_k | Cluster-k
𝒞 | Set of clusters
L_ij | Communication link between P_i and P_j
ℒ | Set of communication links
t_exe(P_i, T_j) | Execution time of task T_j on PE P_i
t_comm(L_ij) | Communication latency from P_i to P_j
s | State
𝒮 | Set of states
v_ij | Communication volume from task T_i to T_j
𝒜 | Set of actions
f_s | Static features
f_d | Dynamic features
π^C(s) | Apply cluster policy for state s
π^P_c(s) | Apply PE policy in cluster-c for state s
π | Policy
π* | Oracle policy
π_G | Policy for many-core platform configuration G
π*_G | Oracle for many-core platform configuration G
TABLE I: Summary of the notations used in this paper

A particular instance of the scheduling problem is illustrated in Fig. 1(c). Task-6 is scheduled to the big core (although it executes faster on Acc-2) since Acc-2 is not available at the time of the decision. Similarly, Task-4 is scheduled to the LITTLE core (even though it executes faster on the big core) because the big core is busy when Task-4 is ready to execute. In general, scheduling complex DAGs on heterogeneous many-core platforms presents a multitude of choices, making the runtime scheduling problem highly complex. The complexity increases further with: (1) overlapping DAGs at runtime, (2) multiple applications executing simultaneously, and (3) optimization for objectives such as performance, energy, etc.

The proposed solution leverages imitation learning and is outlined in Fig. 2. IL, also referred to as learning by demonstration, is an adaptation of supervised learning to sequential decision-making problems. The decision-making space is segmented into distinct decision epochs, called states (s ∈ 𝒮). There exists a finite set of actions 𝒜 for every state s. IL uses policies that map each state s ∈ 𝒮 to a corresponding action.

Definition 3: The Oracle policy (expert) π* : 𝒮 → 𝒜 maps a given system state to the optimal action. In our runtime scheduling problem, the state includes the set of ready tasks, and the actions correspond to the assignment of tasks to processing elements in 𝒫. Given the Oracle π*, the goal of imitation learning is to learn a runtime policy π that approximates it. We construct an Oracle offline and approximate it using a hierarchical policy with two levels. Consider a generic heterogeneous many-core platform with a set of clusters 𝒞, as illustrated in Fig. 2. At the first level, an IL policy π^C chooses one cluster (among |𝒞| clusters) for the task to be executed in. The first-level policy assigns the ready tasks to one of the clusters in 𝒞, since each PE within the same cluster has the same static parameters. Then, a cluster-level policy π^P_c assigns the task to a specific PE within that cluster. The details of state representation, Oracle generation, and hierarchical policy design are presented in the next section.

IV Proposed Methodology and Approach

This section first introduces the system state representation, including the features used by the IL policies. Then, it presents the Oracle generation process, and the design of the hierarchical IL policies. Table I details the notations that will be used hereafter.

IV-A System State Representation

Even the offline scheduling problem, which relies only on static features such as average execution times, is NP-complete. The complexity of runtime decisions is further exacerbated as the system schedules multiple applications that exhibit streaming behavior. In the streaming scenario, incoming frames do not observe an empty system with idle processors. In strong contrast, PEs have different utilizations, and there may be an arbitrary number of partially processed frames in the wait queues of the PEs. Since our goal is to learn a set of policies that generalize to all applications and all streaming intensities, the ability to learn the scheduling decisions critically depends on the effectiveness of the state representation. The system state should encompass both static and dynamic aspects of the set of tasks, the applications, and the target platform. Naive representations of DAGs include the adjacency matrix and the adjacency list. However, these representations suffer from drawbacks such as large storage requirements, highly sparse matrices that complicate the training of supervised learning techniques, and poor scalability to multiple streaming applications. In contrast, we carefully study the factors that influence task scheduling in a streaming scenario and construct features that accurately represent the system state. We broadly categorize the features that make up the state as follows:

  • Task features: This set includes the attributes of individual tasks. They can be both static, such as the average execution time of a task on a given PE (t_exe(P_i, T_j)), and dynamic, such as the relative order of the task in the ready queue.

  • Application features: This set describes the characteristics of the entire application. They are static features, such as the number of tasks in the application and the precedence constraints between them.

  • PE features: This set describes the dynamic state of the processing elements. Examples include the earliest available times (readiness) of the PEs to execute tasks.

The static features are determined at design time, whereas the dynamic features can only be computed at runtime. The static features aid in exploiting design-time behavior. For example, t_exe(P_i, T_j) helps the scheduler compare the expected performance of different PEs. Dynamic features, on the other hand, capture the runtime dependencies between tasks and jobs, as well as the busy states of the processing elements. For example, the expected time when a cluster becomes available for processing adds invaluable information that is only available at runtime.

Feature Type | Feature Description | Feature Categories
Static (f_s) | ID of task T_j in the DAG | Task
Static (f_s) | Execution time of task T_j on PE P_i (t_exe(P_i, T_j)) | Task, PE
Static (f_s) | Downward depth of task T_j in the DAG | Task, Application
Static (f_s) | IDs of the predecessor tasks of T_j | Task, Application
Static (f_s) | Application ID | Application
Static (f_s) | Power consumption of task T_j on PE P_i | Task, PE
Dynamic (f_d) | Relative order of T_j in the ready queue | Task
Dynamic (f_d) | Earliest time when the PEs in cluster C_k are ready for task execution | PE
Dynamic (f_d) | Clusters in which the predecessor tasks of T_j executed | Task
Dynamic (f_d) | Communication volume from task T_i to T_j (v_ij) | Task
TABLE II: Types of features employed for state representation from the point of view of task T_j

In summary, the features of a task comprehensively represent the task itself and the state of the processing elements in the system, enabling the scheduler to effectively learn the decisions of the Oracle policy. The specific types of features used in this work to represent the state and their categories are listed in Table II. The static and dynamic features are denoted as f_s and f_d, respectively. Then, we define the system state at a given time instant t using the features in Table II as:

s(t) = [f_s, f_d(t)]                                           (1)

where f_s and f_d(t) denote the static and dynamic features, respectively, at time t. For an SoC with 16 processing elements grouped into 5 clusters, we obtain a set of 45 features for the proposed IL technique.
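
As a concrete illustration, the minimal sketch below assembles such a state vector from the feature types in Table II. The field names, padding scheme, and exact feature count are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

MAX_PREDECESSORS = 4      # pad predecessor-related features to a fixed length (assumption)
NOT_SUPPORTED = 1e6       # sentinel execution time for clusters that cannot run the task

def system_state(task, clusters, ready_queue):
    """task: dict of task attributes; clusters: list of cluster dicts (hypothetical names)."""
    def pad(xs):
        return (list(xs) + [-1] * MAX_PREDECESSORS)[:MAX_PREDECESSORS]
    # Static features f_s (Table II)
    f_s = [task["id"], task["depth"], task["app_id"]]
    f_s += pad(task["predecessor_ids"])
    f_s += [c["t_exe"].get(task["id"], NOT_SUPPORTED) for c in clusters]   # t_exe per cluster
    f_s += [c["power"].get(task["id"], 0.0) for c in clusters]
    # Dynamic features f_d(t) (Table II)
    f_d = [ready_queue.index(task["id"])]                                  # order in ready queue
    f_d += [c["earliest_available_time"] for c in clusters]                # cluster readiness
    f_d += pad(task["predecessor_cluster_ids"])                            # where predecessors ran
    f_d += [task["incoming_volume"]]                                       # communication volume
    # Equation 1: s(t) = [f_s, f_d(t)]
    return np.array(f_s + f_d, dtype=np.float32)
```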

IV-B Oracle Generation

The goal of this work is to develop generalized scheduling models for streaming applications of multiple types to be executed on heterogeneous many-core systems. The generality of the IL-based scheduling framework enables using IL with any Oracle. The Oracle can be any scheduling algorithm that optimizes an arbitrary metric, such as execution time, power consumption, and total SoC energy.

To generate the training dataset, we implemented both an optimal scheduler using CP and heuristic schedulers. These schedulers are integrated into an SoC simulation framework, as explained under experimental results. Suppose a new task T becomes ready at time t. The Oracle is called to schedule the task to a PE. The Oracle policy for this task with system state s(t) can be expressed as:

π*(s(t)) = p                                           (2)

where p is the PE that task T is scheduled to, and s(t) is the system state defined in Equation 1. After each scheduling action, the scheduled task T, the system state s(t), and the scheduling decision p are added to the training data. To enable the Oracle policies to generalize to different workload conditions, we constructed workload mixes using the target applications at different data rates, as detailed in Section V-A.
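
The sketch below illustrates how each Oracle invocation could be logged as a (state, label) pair, split into a cluster-level dataset and per-cluster PE-level datasets to match the hierarchical policies of Section IV-C. The simulator and Oracle interfaces are hypothetical, not the authors' code.

```python
training_data = {"cluster": [], "pe": {}}          # one PE-level dataset per cluster

def record_oracle_decision(task, state, oracle):
    """oracle.schedule implements Equation 2: pi*(s(t)) = p (hypothetical interface)."""
    pe = oracle.schedule(task, state)
    training_data["cluster"].append((state, pe.cluster))
    training_data["pe"].setdefault(pe.cluster, []).append((state, pe.id))
    return pe                                      # the simulator then executes the task on this PE
```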

IV-C IL-based Scheduling Framework

for each ready task T do
        s = current system state for task T
        /* Level-1 IL policy to assign the cluster */
        c = π^C(s)
        /* Level-2 IL policy to assign the PE within cluster c */
        p = π^P_c(s)
        /* Assign T to the predicted PE */
        Assign task T to PE p
end for
Algorithm 1: Hierarchical imitation learning framework

This section presents the hierarchical IL-based scheduler for runtime task scheduling in heterogeneous many-core platforms. A hierarchical structure is more scalable since it breaks a complex scheduling problem down into simpler problems. Furthermore, it achieves a significantly higher classification accuracy compared to a flat classifier (>93% versus 55%), as detailed in Section V-D.

Our hierarchical IL-based scheduler approximates the Oracle with two levels, as outlined in Algorithm 1. The first-level policy π^C is a coarse-grained scheduler that assigns tasks to clusters. This is a natural choice since individual PEs within a cluster have identical static parameters, i.e., they differ only in terms of their dynamic states. The second level (i.e., fine-grained scheduling) consists of one dedicated policy π^P_c for each cluster c ∈ 𝒞. These policies assign the input task to a PE within their own cluster. We leverage off-the-shelf machine learning techniques, such as regression trees and neural networks, to construct the IL policies. Applying these policies at runtime approximates the corresponding Oracle policies constructed offline.
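
A minimal sketch of this two-level construction is shown below, using scikit-learn's DecisionTreeClassifier as a stand-in for the regression-tree classifier and the dataset layout from the earlier logging sketch; it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_policies(training_data, max_depth=12):
    # Level-1 (cluster) policy
    X = np.array([s for s, _ in training_data["cluster"]])
    y = np.array([c for _, c in training_data["cluster"]])
    cluster_policy = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
    # Level-2 (per-cluster PE) policies
    pe_policies = {}
    for cluster, samples in training_data["pe"].items():
        Xc = np.array([s for s, _ in samples])
        yc = np.array([p for _, p in samples])
        pe_policies[cluster] = DecisionTreeClassifier(max_depth=max_depth).fit(Xc, yc)
    return cluster_policy, pe_policies

def schedule(state, cluster_policy, pe_policies):
    # Algorithm 1: level-1 picks the cluster, level-2 picks the PE within it.
    cluster = cluster_policy.predict(state.reshape(1, -1))[0]
    return pe_policies[cluster].predict(state.reshape(1, -1))[0]
```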

IL policies suffer from error propagation because the state-action pairs in the Oracle dataset are not necessarily i.i.d. (independent and identically distributed). Specifically, if the decision taken by the IL policies at a particular decision epoch differs from the Oracle, then the resulting state at the next epoch also differs from the one the Oracle would observe. Therefore, the error accumulates at each decision epoch. This can occur during runtime task scheduling when the policies are applied to applications they were not trained on. This problem is addressed by the data aggregation algorithm (DAgger), proposed to improve IL policies [24]. DAgger adds the system state and the Oracle decision to the training data whenever the IL policy makes a wrong decision. Then, the policies are retrained after the execution of the workload.

for each ready task T do
        s = current system state for task T
        if π^C(s) == π*^C(s) then
                /* Cluster prediction matches the Oracle */
                c = π^C(s)
                if π^P_c(s) != π*^P_c(s) then
                        Aggregate state s and Oracle PE label π*^P_c(s) to the dataset
                end if
        else
                Aggregate state s and Oracle cluster label π*^C(s) to the dataset
                c = π*^C(s)
                if π^P_c(s) != π*^P_c(s) then
                        Aggregate state s and Oracle PE label π*^P_c(s) to the dataset
                end if
        end if
        /* Assign T to the predicted PE */
        Assign task T to the PE predicted by the IL policies
end for
Algorithm 2: Methodology to aggregate data in a hierarchical imitation learning framework

DAgger is not readily applicable to the runtime scheduling problem since the number of states is unbounded: a scheduling decision taken at time t for state s(t) can result in any possible next state s(t'). In other words, the feature space is continuous, and hence, it is infeasible to generate an exhaustive Oracle offline. We overcome this challenge by generating the Oracle on the fly. More specifically, we incorporate the proposed framework into a simulator. The offline scheduler used as the Oracle is called dynamically for each new task. Then, we augment the training data with all the features, the Oracle actions, and the decisions of the IL policy under construction. Hence, the data aggregation process is performed as part of the dynamic simulation.

The hierarchical nature of the proposed IL framework introduces one more complexity to data aggregation. The cluster policy’s output may be correct while the PE policy reaches a wrong decision (or vice versa). If the cluster prediction is correct, we use it to select the PE policy of that cluster, as outlined in Algorithm 2. Then, if the PE prediction is also correct, execution continues; otherwise, the state and the Oracle PE label are aggregated into the dataset. However, if the cluster prediction does not align with the Oracle, in addition to aggregating the cluster data, the on-the-fly Oracle is used to select the PE policy; the PE prediction is then compared to the Oracle, and the PE data is aggregated in case of a wrong prediction.
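
The sketch below outlines one such DAgger-style iteration for the hierarchical policies, reusing train_policies from the earlier sketch; the simulator and Oracle interfaces are hypothetical and the logic follows Algorithm 2 only in spirit.

```python
def dagger_iteration(simulator, oracle, cluster_policy, pe_policies, training_data):
    # Run the full workload with the current IL policies; at every decision epoch,
    # query the on-the-fly Oracle and aggregate the states where the policies disagree.
    for task, state in simulator.run_workload(cluster_policy, pe_policies):
        oracle_pe = oracle.schedule(task, state)                       # expert label
        pred_cluster = cluster_policy.predict(state.reshape(1, -1))[0]
        if pred_cluster != oracle_pe.cluster:
            training_data["cluster"].append((state, oracle_pe.cluster))
            pred_cluster = oracle_pe.cluster                           # Algorithm 2: use Oracle cluster
        pred_pe = pe_policies[pred_cluster].predict(state.reshape(1, -1))[0]
        if pred_pe != oracle_pe.id:
            training_data["pe"].setdefault(oracle_pe.cluster, []).append((state, oracle_pe.id))
    return train_policies(training_data)                               # retrain after the workload
```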

V Experimental Results

Section V-A presents the experimental methodology and setup. Section V-B explores different machine learning classifiers for IL. The significance of the proposed features is studied using a regression tree classifier in Section V-C. Section V-D presents the evaluation of the proposed IL scheduler. Section V-E analyzes the generalization capabilities of the IL scheduler. The performance analysis with multiple workloads is presented in Section V-F. We demonstrate the applicability of the proposed IL technique to energy-based optimization objectives in Section V-G. Section V-H presents comparisons with an RL-based scheduler, and Section V-I analyzes the complexity of the proposed approach.

V-A Experimental Methodology and Setup

App | # of Tasks | Execution Time (μs) | Supported Clusters | # frames in workload | # tasks in workload
WiFi-TX | 27 | 301 | big, LITTLE, FFT | 69 | 1863
WiFi-RX | 34 | 71 | big, LITTLE, FFT, Viterbi | 111 | 3774
RangeDet | 7 | 177 | big, LITTLE, FFT | 64 | 448
SC-TX | 8 | 56 | big, LITTLE | 64 | 512
SC-RX | 8 | 154 | big, LITTLE, Viterbi | 91 | 728
TempMit | 10 | 81 | big, LITTLE, Matrix mult. | 101 | 1010
TOTAL | | | | 500 | 8335
TABLE III: Characteristics of the applications used in this study and the number of frames of each application in the workload

Domain Applications: The proposed IL scheduling methodology is evaluated using applications from wireless communication and radar processing domains. We employ WiFi-transmitter (WiFi-TX), WiFi-receiver (WiFi-RX), range detection (RangeDet), single-carrier transmitter (SC-TX), single-carrier receiver (SC-RX) and temporal mitigation (TempMit) applications, as summarized in Table III. We construct workload mixes using these applications and run them in parallel.

Fig. 3: Configuration of the heterogeneous many-core platform comprising 16 processing elements, used for scheduler evaluations.

Heterogeneous DSSoC Configuration: Considering the nature of the applications, we employ a DSSoC with 16 PEs, including accelerators for the most computationally intensive tasks; the PEs are divided into five clusters of multiple homogeneous PEs, as illustrated in Fig. 3. To enable a power-performance trade-off while using general-purpose cores, we include a big cluster with four Arm A57 cores and a LITTLE cluster with four Arm A53 cores. In addition, the DSSoC integrates accelerator clusters for matrix multiplication, FFT, and Viterbi decoding to address the computing requirements of the target domain applications summarized in Table III. The accelerator interfaces are adapted from [15]. The number of accelerator instances in each cluster is selected based on how heavily the target applications use them. For example, three out of the six reference applications involve FFT, and the range detection application alone has three FFT operations. Therefore, we employ four instances of the FFT hardware accelerator and two instances each of the Viterbi and matrix multiplication accelerators, as shown in Fig. 3.

Simulation Framework: We evaluate the proposed IL scheduler using a discrete event-based simulation framework [3], which is validated against two commercial SoCs: Odroid-XU3 [10] and Zynq Ultrascale+ ZCU102 [41]. This framework enables simulations of the target applications modeled as DAGs under different scheduling algorithms. More specifically, a new instance of a DAG arrives following a specified inter-arrival rate and distribution, such as an exponential distribution. After the arrival of each DAG instance, called a frame, the simulator calls the scheduler under study. Then, the scheduler uses the information in the DAG and the current system state to assign the ready tasks to the waiting queues of the PEs. The simulator facilitates storing this information and the scheduling decision to construct the Oracle, as described in Section IV-B.

The execution times and power consumption for the tasks in our domain applications are profiled on Odroid-XU3 and Zynq ZCU102 SoCs. The simulator uses these profiling results to determine the execution time and power consumption of each task. After all the tasks that belong to the same frame are executed, the processing of the corresponding frame completes. The simulator keeps track of the execution time and energy consumed for each frame. These end-to-end values are within 3%, on average, of the measurements on Odroid-XU3 and Zynq ZCU102 SoCs.
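
For intuition, the sketch below shows how streaming frame arrivals with exponentially distributed inter-arrival times could be generated; the function and parameter names are hypothetical and the rate is illustrative only.

```python
import random

def frame_arrivals(rate_frames_per_ms, num_frames, seed=0):
    # Exponentially distributed inter-arrival times with the given mean rate;
    # yields the absolute arrival time of each frame.
    rng = random.Random(seed)
    t = 0.0
    for _ in range(num_frames):
        t += rng.expovariate(rate_frames_per_ms)
        yield t

arrivals = list(frame_arrivals(rate_frames_per_ms=0.5, num_frames=5))
```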

Fig. 4: A comparison of the average runtime per scheduling decision for each application with the CP-1min, CP-5min, and ETF schedulers.

Scheduling Algorithms used for Oracle and Comparisons: We developed a CP formulation using IBM ILOG CPLEX Optimization Studio [33] to obtain optimal schedules whenever the problem size allows. After the arrival of each frame, the simulator calls the CP solver to find the schedule dynamically as a function of the current system state. Since the CP solver takes hours for large inputs (on the order of 100 tasks), we implemented two versions with a one-minute (CP-1min) and a five-minute (CP-5min) time-out per scheduling decision. When the model fails to find an optimal schedule, we use the best solution found within the time limit. Fig. 4 shows that the average time of the CP solver per scheduling decision for the benchmark applications is about 0.8 seconds and 3.5 seconds, respectively, based on the time limit. Consequently, one entire simulation can take up to 2 days, even with a time-out.

We also implemented the ETF heuristic scheduler, which iterates over all ready tasks and possible assignments to find the earliest finish time while considering communication overheads. Its average execution time is close to 0.3 ms, which is still prohibitive for a runtime scheduler, as shown in Fig. 4. However, we observed that it performs better than CP-1min and only marginally worse than CP-5min, as we detail in Section V-D.

Oracle generation with the CP formulation is not practical for two reasons. First, for small input sizes (e.g., fewer than ten tasks), there may be multiple (incumbent) optimal solutions, and CP chooses one of them arbitrarily. Second, for large input sizes, CP terminates at the time limit providing the best solution found so far, which is sub-optimal. The sub-optimal solutions produced by CP vary based on the problem size and the time limit. In contrast, ETF is easier to imitate at runtime, and its results are within 8.2% of the CP results. Therefore, we use ETF as the Oracle policy in our experiments and use the results of the CP schedulers as reference points. We train IL policies for this Oracle in Section V-B and evaluate their performance in Section V-D.
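
For reference, the sketch below outlines one ETF selection step under simplified assumptions (hypothetical data structures and a single additive communication-delay term); repeating the selection until every ready task has been committed yields the quadratic behavior in the number of tasks discussed in Section V-I.

```python
def etf_select(ready_tasks, pes, now, comm_delay):
    # One ETF selection step: over all ready tasks and all supporting PEs,
    # pick the (task, PE) pair with the earliest finish time.
    best = None
    for task in ready_tasks:                               # O(n) ready tasks
        for pe in pes:                                     # O(m) PEs -> O(n*m) per selection
            if task.id not in pe.supported:
                continue
            start = max(pe.available_at, now) + comm_delay(task, pe)
            finish = start + pe.t_exe[task.id]
            if best is None or finish < best[2]:
                best = (task, pe, finish)
    return best                                            # commit this assignment, then repeat
```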

Classifier | Cluster Policy | LITTLE Policy | big Policy | MatMult Policy | FFT Policy | Viterbi Policy
RT | 99.6 | 93.8 | 95.1 | 99.9 | 99.5 | 100
SVC | 95.0 | 85.4 | 89.9 | 97.8 | 97.5 | 98.0
LR | 89.9 | 79.1 | 72.0 | 98.7 | 98.2 | 98.0
NN | 97.7 | 93.3 | 93.6 | 99.3 | 98.9 | 98.1
TABLE IV: Classification accuracies (%) of the trained IL policies with different machine learning classifiers.
Classifier | Latency on Odroid-XU3 (Arm A15) (μs) | Latency on Zynq Ultrascale+ ZCU102 (Arm A53) (μs) | Storage (KB)
RT | 1.1 | 1.1 | 19.3
NN | 14.4 | 37 | 16.9
TABLE V: Execution time and storage overheads per IL policy for regression tree and neural network classifiers.

V-B Exploring Different Machine Learning Classifiers for IL

We explore various ML classifiers within the IL methodology to approximate the Oracle policy. One of the key metrics that drives the choice of machine learning technique is the classification accuracy of the IL policies. At the same time, the policy should also have low storage and execution-time overheads. We evaluate the following algorithms for classification accuracy and implementation efficiency: regression tree (RT), support vector classifier (SVC), logistic regression (LR), and a multi-layer perceptron neural network (NN) with 4 hidden layers and 32 neurons in each hidden layer.

The classification accuracies of the ML algorithms under study are listed in Table IV. In general, all classifiers achieve high accuracy in choosing the cluster (the first column). At the second level, they choose the correct PE with high accuracy (>97%) within the hardware accelerator clusters. However, they have lower accuracy and larger variation for the LITTLE and big clusters (highlighted columns). This is intuitive, as the LITTLE and big clusters can execute all types of tasks in the applications, whereas the accelerators execute fewer task types. In strong contrast, a flat policy, which directly predicts the PE, results in a training accuracy of at most 55%. Therefore, we focus on the proposed hierarchical IL methodology.

Features Excluded from Training | Cluster Policy | LITTLE Policy | big Policy | MatMul Policy | FFT Policy | Viterbi Policy
None | 99.6 | 93.8 | 95.1 | 99.9 | 99.5 | 100
Static features | 87.3 | 93.8 | 92.7 | 99.9 | 99.5 | 100
Dynamic features | 88.7 | 52.1 | 57.6 | 94.2 | 70.5 | 98
PE availability times | 92.2 | 51.1 | 61.5 | 94.1 | 66.7 | 98.1
Task ID, depth, app. ID | 90.9 | 93.6 | 95.3 | 99.9 | 99.5 | 100
TABLE VI: Training accuracy (%) of IL policies with subsets of the proposed feature set

Regression trees (RT) trained with a maximum depth of 12 produce the best accuracy for the cluster and PE policies, with more than 99.5% accuracy for the cluster and hardware accelerator policies. RT also achieves accuracies of 93.8% and 95.1% in predicting PEs within the LITTLE and big clusters, respectively, the highest among all evaluated classifiers. The classification accuracy of the NN policies is comparable to RT, with a slightly lower cluster prediction accuracy of 97.7%. In contrast, the support vector classifier (SVC) and logistic regression (LR) are not preferred due to their lower accuracies (less than 90% and 80%, respectively) in predicting PEs within the LITTLE and big clusters.

Fig. 5: Average execution time comparison of the applications with Oracle, IL (Proposed) and IL policies with subsets of features. As shown, the average execution time with IL closely follows the Oracle.

We choose regression trees and neural networks for the latency and storage overhead analysis due to their superior accuracy. The latency of RT is 1.1 μs, both on the Arm Cortex-A15 in Odroid-XU3 and on the Arm Cortex-A53 in Zynq ZCU102, as shown in Table V. In comparison, the scheduling overhead of CFS, the default Linux scheduler, on Zynq ZCU102 running Linux kernel 4.9 is 1.2 μs, which is slightly larger than our solution. The storage overhead of an RT policy is 19.33 KB. The NN policies incur an overhead of 14.4 μs on the Arm Cortex-A15 cluster in Odroid-XU3 and 37 μs on the Arm Cortex-A53 in Zynq, with a storage overhead of 16.89 KB. NNs are preferable for use in an online environment since their weights can be updated incrementally using the back-propagation algorithm. However, due to the competitive classification accuracy and lower latency overhead of RTs compared to NNs, we choose RTs for the rest of the experiments.

V-C Feature Space Exploration with Regression Tree Classifier

This section explores the significance of the features chosen to represent the state. For this analysis, we systematically assess the impact of the input features on the training accuracy of the RT classifier and on the average execution time.

The training accuracy with subsets of features and the corresponding scheduler performance are shown in Table VI and Fig. 5, respectively. First, we exclude all static features from the training dataset. The training accuracy for the cluster prediction drops significantly (from 99.6% to 87.3%). Since we use hierarchical IL policies, an incorrect first-level decision results in a significant penalty for the decisions at the next level. Second, we exclude all dynamic features from training. This results in a similar drop for the cluster policy but significantly affects the policies constructed for the LITTLE, big, and FFT clusters. Next, a similar trend is observed when only the PE availability times are excluded from the feature set; the accuracy is marginally higher since the other dynamic features still contribute to learning the scheduling decisions. Finally, we remove a few task-related features, such as the downward depth and the task and application identifiers. In this case, the impact is mainly on the cluster policy accuracy, since these features describe the position of a node in the DAG and influence the cluster mapping.

As observed in Fig. 5, the average execution time of the workload degrades significantly when any of these feature subsets is excluded. Hence, the chosen features help construct effective IL policies, approximating the Oracle with over 99% accuracy in execution time.

Fig. 6: Comparison of average job execution time between Oracle, CP solutions, and imitation learning policies to schedule a workload comprising a mix of six streaming applications. IL scheduler policies with baseline-IL (before DAgger) and with IL-DAgger (Proposed) are shown in the comparison.

V-D IL-Scheduler Performance Evaluation

This section compares the performance of the proposed policy to the ETF Oracle, CP-1min, and CP-5min. Since heterogeneous many-core systems are capable of running multiple applications simultaneously, we stream the frames in our application mix (see Table III) with increasing injection rates. For example, a normalized throughput of 1.0 in Fig. 6 corresponds to 19.78 frames/ms. Since the frames are injected faster than they can be processed, there are many overlapping frames at any given time.

First, we train the IL policies with all six reference applications and refer to the result as the baseline-IL scheduler. IL policies suffer from error propagation due to the non-i.i.d. nature of the training data. To overcome this limitation, we use the data aggregation technique adapted for the hierarchical IL framework (IL-DAgger), as discussed in Section IV-C. A DAgger iteration involves executing the entire workload. We execute ten DAgger iterations and choose the best iteration, with performance within 2% of the Oracle. If we fail to achieve the target, we continue with more iterations.

Fig. 6 shows that the proposed IL-DAgger scheduler performs almost identically to the Oracle; the average percentage difference between them is 1%. More notably, the gap between the proposed IL-DAgger policy and the optimal CP solution is only 9.22%. We emphasize that CP is included only as a reference point; it has six orders of magnitude larger execution time overhead and cannot be used at runtime. Furthermore, the proposed approach performs better than CP-1min, which is not able to find a good schedule within the one-minute time limit per decision. Finally, we note that the baseline-IL policy approaches the performance of the proposed policy. This is intuitive, since both policies are tested on known applications in this experiment, in contrast to the leave-one-out experiments presented in Section V-E.

Pulse Doppler Application Case Study: We demonstrate the applicability of the proposed IL-scheduling technique in more complex scenarios using a pulse Doppler application. It is a real-world radar application that computes the velocity of a moving target object. This application is significantly more complex, with 13×–64× more tasks than the other applications. Specifically, it consists of 449 tasks: 192 FFT tasks, 128 inverse-FFT tasks, and 129 other computations. The FFT and inverse-FFT operations can execute on the general-purpose cores and hardware accelerators, whereas the other tasks can execute only on the general-purpose cores.

Fig. 7: Average slowdown of IL policies in comparison with the Oracle for leave-one-out (LOO) experiments before and after DAgger (Proposed).
Fig. 8: Average execution time with the Oracle, IL-DAgger (all applications included in training), IL with one application excluded from training (IL-LOO), and the leave-one-out policy improved with DAgger (Proposed IL-LOO-DAgger). The excluded applications are: (a) WiFi-TX, (b) WiFi-RX, (c) range detection, (d) single-carrier TX, (e) single-carrier RX, and (f) temporal mitigation.

The proposed IL policies achieve an average execution time within 2% of the Oracle. The 2% gap is acceptable, considering that the application saturates the computing platform quickly due to its high complexity. Moreover, the CP-based approach does not produce a viable solution with either the 1-minute or the 5-minute time limit due to the large problem size. For this reason, this application is not included in our workload mixes and the rest of the comparisons.

V-E Illustration of Generalization with IL for Unseen Applications, Runtime Variations and Platforms

This section analyzes the generalization of the proposed IL-based scheduling approach to unseen applications, runtime variations, and many-core platform configurations.

IL-Scheduler Generalization to Unseen Applications using Leave-one-out Experiments: IL, being an adaptation of supervised learning to sequential decision making, can suffer from a lack of generalization to unseen applications. To analyze the effects of unseen applications, we train IL policies excluding one application at a time from the training dataset [35].

To compare the performance of two schedulers A and B, we use the job slowdown metric slowdown_{A,B} = T_A / T_B, where T_A and T_B are the job execution times under schedulers A and B; slowdown_{A,B} > 1 when T_A > T_B [18]. The average slowdown of scheduler A with respect to scheduler B is computed as the average slowdown over all jobs at all injection rates. The results present an interesting and intuitive explanation of the average job slowdown in execution times for each of the leave-one-out experiments.
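
A minimal sketch of this metric, assuming per-job execution time lists collected under the two schedulers:

```python
def average_slowdown(exec_times_a, exec_times_b):
    # Per-job slowdown of scheduler A relative to scheduler B, averaged over all jobs;
    # a value greater than 1 means scheduler A is slower.
    ratios = [ta / tb for ta, tb in zip(exec_times_a, exec_times_b)]
    return sum(ratios) / len(ratios)
```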

Fig. 7 shows the average slowdown of the baseline IL (IL-LOO) and the proposed policy with DAgger iterations (IL-LOO-DAgger) with respect to the Oracle. We observe that the proposed policy outperforms the baseline IL for all applications, with the most significant gains obtained for the WiFi-RX and SC-RX applications. These two applications contain a Viterbi decoder operation, which is very expensive to compute on general-purpose cores and highly efficient on the hardware accelerators. When these applications are excluded, the IL policies are not exposed to the corresponding states in the training dataset and make incorrect decisions. The erroneous PE assignments lead to an average slowdown of more than 2× for the receiver applications. The slowdown when the transmitter applications (WiFi-TX and SC-TX) are excluded from training is approximately 1.13. The range detection and temporal mitigation applications experience slowdowns of 1.25 and 1.54, respectively, in the leave-one-out experiments. The extent of the slowdown in each scenario depends on the application excluded from training and its execution time profile on the different processing clusters. In summary, the average slowdown of all leave-one-out IL policies after DAgger (IL-LOO-DAgger) improves to ~1.01 in comparison with the Oracle, as shown in Fig. 7.

Fig. 8(a)-(f) show the average job execution times for the Oracle (ETF), baseline-IL, IL with leave-one-out, and DAgger applied to the leave-one-out policies for each of the applications. The highest number of DAgger iterations needed was 8 for the SC-RX application, and the lowest was 2 for the range detection application. If the DAgger criterion is relaxed to achieving a slowdown of 1.02, all applications achieve it in fewer than 5 iterations. The drastic improvement in the accuracy of the IL policies within a few iterations shows that the policies generalize quickly and well to unseen applications, making them suitable for runtime use.

IL-Scheduler Generalization with Runtime Variations:

Application | WiFi-TX | WiFi-RX | RangeDet | SC-TX | SC-RX | TempMit
Zynq ZCU-102 | 0.34 | 0.56 | 0.66 | 1.15 | 1.80 | 0.63
Odroid-XU3 | 6.43 | 5.04 | 5.43 | 6.76 | 7.14 | 3.14
TABLE VII: Standard deviation (as a percentage of execution time) of the application profiles on Odroid-XU3 and Zynq ZCU-102.
Platform Config. | LITTLE PEs | big PEs | MatMul Acc. PEs | FFT Acc. PEs | Decoder Acc. PEs
G1 (Baseline) | 4 | 4 | 2 | 4 | 2
G2 | 2 | 2 | 2 | 2 | 2
G3 | 1 | 1 | 1 | 1 | 1
G4 | 4 | 4 | 1 | 1 | 1
G5 | 4 | 4 | 0 | 0 | 0
TABLE VIII: Configurations of the many-core platforms.

Tasks experience runtime variations due to changes in system workload, memory contention, and congestion. Hence, it is crucial to analyze the performance of the proposed approach when tasks experience such variations, rather than observing only their static profiles. Our simulator accounts for these variations by using a Gaussian distribution to generate variations in execution time [38]. To enable evaluation in a realistic scenario, all tasks in every application are profiled for execution-time variations on the big and LITTLE cores of the Odroid-XU3 and on the Cortex-A53 cores and hardware accelerators of the Zynq ZCU102.

Fig. 9: IL policy evaluation with multiple many-core platform configurations. IL policies are trained with only configuration G1.
Fig. 10: Comparison of the average job slowdown of the IL-DAgger (Proposed) policies against the Oracle for 50 different workloads. The slowdown of the IL-DAgger policies is shown for workloads with different intensities of each application in the benchmark suite.

We present the average standard deviation as a ratio of the execution time for the tasks in Table VII. The maximum standard deviation is less than 2% of the execution time on the Zynq platform and less than 8% on the Odroid-XU3. To account for runtime variations, we add noise of 1%, 5%, 10%, and 15% to the task execution times during simulation. The IL policies achieve average slowdowns of less than 1.01 in all cases. Although the IL policies are trained with static execution time profiles, these results demonstrate that they adapt well to execution time variations at runtime. Similarly, the policies also generalize to variations in communication time and power consumption.
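
A minimal sketch of how such variation can be injected, assuming the noise level is specified as a fraction of the nominal (profiled) execution time:

```python
import random

def noisy_exec_time(nominal, noise_pct, rng=random):
    # Perturb a profiled execution time with zero-mean Gaussian noise whose standard
    # deviation is noise_pct of the nominal value; clamp at zero to stay physical.
    return max(0.0, rng.gauss(nominal, noise_pct * nominal))

samples = [noisy_exec_time(100.0, 0.10) for _ in range(5)]   # e.g., the 10% noise level
```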

IL-Scheduler Generalization with Platform Configuration: This section presents a detailed analysis of the IL policies when the platform configuration is varied, i.e., the number of clusters, general-purpose cores, and hardware accelerators. To this end, we choose the five SoC configurations presented in Table VIII. The Oracle policy for configuration G1 is denoted by π*_G1, and an IL policy evaluated on configuration G1 is denoted as π_G1. G1 is the baseline configuration used for the extensive evaluations. Between configurations G1–G4, we vary the number of PEs within each cluster. We also consider a degenerate case that comprises only the LITTLE and big clusters (configuration G5). We train IL policies with configuration G1 only. The average execution times with the IL policies are within 1% of their respective Oracles for three of the configurations, within 2% for a fourth, and within 3% for the remaining one. The accuracy of π_G5 with respect to its Oracle (π*_G5) is slightly lower (97%), as this platform saturates its computing resources very quickly, as shown in Fig. 9.

Fig. 11: (a) Average execution time and (b) average energy consumption of the workload with the Oracles and IL policies for the performance, energy, energy-delay product (EDP), and energy-delay-squared product (ED²P) objectives.

Based on these experiments, we observe that the IL policies generalize well to different many-core platform configurations. A change in system configuration is accurately captured in the features (execution times, PE availability times, etc.), which enables the policies to generalize to new platform configurations. When the cluster configuration of the many-core platform changes, the IL policies still generalize well (within 3% of the Oracle) and can be further improved using DAgger (to within 1% of the Oracle).

V-F Performance Analysis with Multiple Workloads

To demonstrate the generalization capability of the IL policies trained and aggregated on one workload (IL-DAgger), we evaluate the same policies on 50 different workloads consisting of different combinations of application mixes at varying injection rates; each of these workloads contains 500 frames. For this extensive evaluation, we consider workloads each of which is intensive in one of WiFi-TX, WiFi-RX, range detection, SC-TX, SC-RX, or temporal mitigation. Finally, we also consider workloads in which all applications are similarly represented.

Fig. 10 presents the average slowdown for each of the 50 workloads (denoted W-1, W-2, and so on). While W-22 observes a slowdown of 1.01 against the Oracle, all other workloads experience an average slowdown of less than 1.01 (within 1% of the Oracle). Independent of the distribution of the applications in the workloads, the IL policies approximate the Oracle well. On average, the slowdown is less than 1.01, demonstrating that the IL policies generalize to different workloads and streaming intensities.

V-G Evaluation with Energy and Energy-Delay Objectives

Average execution time is crucial for meeting application latency requirements and ensuring user experience. Another critical metric in modern computing systems, especially battery-powered platforms, is energy consumption [21, 23]. Hence, this section evaluates the proposed IL-based approach with the following objectives: performance, energy, energy-delay product (EDP), and energy-delay-squared product (ED²P). We adapt ETF to generate Oracles for each objective. Then, the different Oracles are used to train IL policies for the corresponding objectives. The scheduling decisions are significantly more complex for these Oracles. Hence, we use an RT of depth 16 (the performance objective uses an RT of depth 12) to learn the decisions accurately. The average latency per scheduling decision remains similar for an RT of depth 16 (~1.1 μs on the Cortex-A53).
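
A minimal sketch of how the Oracle's selection criterion can be switched between these objectives: the cost below would replace the finish-time term used by the performance Oracle. The field names and the exact cost combinations are assumptions for illustration, not the authors' implementation.

```python
def oracle_cost(delay, energy, objective):
    # Candidate cost evaluated by the objective-specific Oracle for each (task, PE) option;
    # the option with the lowest cost is selected.
    if objective == "performance":
        return delay
    if objective == "energy":
        return energy
    if objective == "EDP":
        return energy * delay
    if objective == "ED2P":
        return energy * delay ** 2
    raise ValueError(f"unknown objective: {objective}")
```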

Fig. 11(a) and Fig. 11(b) present the average execution time and average energy consumption, respectively, for the IL policies with different objectives. The lowest energy is achieved by the energy Oracle, and energy consumption increases as more emphasis is placed on performance (EDP → ED²P → performance), as expected. The average execution time and energy consumption in all cases are within 1% of the corresponding Oracles. This demonstrates that the proposed IL scheduling approach can learn from Oracles that optimize for arbitrary objectives.

V-H Comparison with Reinforcement Learning

Fig. 12: Comparison of average execution time between Oracle, IL, and RL policies to schedule a workload comprising a mix of six streaming real-world applications.

Since the state-of-the-art machine learning techniques [18, 19] do not target streaming DAG scheduling on heterogeneous many-core platforms, we implemented a policy-gradient-based reinforcement learning technique using a deep neural network (a multi-layer perceptron with 4 hidden layers and 32 neurons in each hidden layer) to compare with the proposed IL-based task scheduling technique. For the RL implementation, we vary the exploration rate between 0.01 and 0.99 and the learning rate from 0.001 to 0.01. The reward function is adapted from [19]. RL starts with random weights and then updates them based on the extent of exploration, exploitation, the learning rate, and the reward function. These factors affect the convergence and quality of the learned RL models.

Fewer than 20% of the experiments with RL converge to a stable policy, and fewer than 10% of them provide performance competitive with the proposed IL scheduler. We choose the best-performing RL solution for comparison with the IL scheduler. The Oracle generation and training parts of the proposed technique take 5.6 minutes and 4.5 minutes, respectively, when running on an Intel Xeon E5-2680 processor at 2.40 GHz. In contrast, an RL-based scheduling policy that uses the policy-gradient method converges in 300 minutes on the same machine. Hence, the proposed technique is approximately 30× faster to train than RL. As shown in Fig. 12, the RL scheduler performs within 11% of the Oracle, whereas the IL scheduler achieves an average execution time within 1% of the Oracle.

In general, RL-based schedulers suffer from the following drawbacks: (1) the need for extensive tuning of hyperparameters (learning rate, exploration rate, and network structure), (2) the difficulty of designing a suitable reward function, and (3) slow convergence on complex problems. In strong contrast, IL policies are guided by strong supervision, which eliminates both the slow convergence and the need for a reward function.

V-I Complexity Analysis of the Proposed Approach

In this section, we compare the complexity of the proposed IL-based task scheduling approach with that of ETF, which is used to construct the Oracle policies. The complexity of ETF is O(n²m) [12], where n is the number of tasks and m is the number of PEs in the system. While ETF is suitable for offline Oracle generation, its quadratic dependence on the number of tasks makes it inefficient for online use. In contrast, the proposed IL policy, which uses fixed-depth regression trees, makes each scheduling decision with a constant number of comparisons, so scheduling n tasks has complexity O(n). Since the complexity of the proposed IL-based policies is linear in the number of tasks, they are practical to implement in heterogeneous many-core systems.
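To make this contrast concrete, the sketch below juxtaposes the nested ready-task/PE scan that gives ETF its O(n²m) cost with the single fixed-depth tree lookup the IL policy performs per task; finish_time and features are placeholder helpers, and task-dependency handling is omitted.

```python
def etf_schedule(tasks, pes, finish_time):
    """ETF sketch: each of the n scheduling steps scans all ready tasks on all
    m PEs, giving O(n^2 * m) overall (dependency handling omitted)."""
    schedule, ready = [], set(tasks)
    while ready:                                               # n iterations
        task, pe = min(((t, p) for t in ready for p in pes),   # O(n * m) scan
                       key=lambda tp: finish_time(*tp))
        ready.remove(task)
        schedule.append((task, pe))
    return schedule

def il_schedule(tasks, policy, features):
    """IL sketch: one fixed-depth tree lookup per task, i.e., O(n) overall."""
    return [(t, int(policy.predict([features(t)])[0])) for t in tasks]
```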

Vi Conclusion and Future Work

Efficient task scheduling in heterogeneous many-core platforms is crucial for system performance, but it is very challenging due to its NP-hardness. In this work, we have presented a hierarchical imitation learning framework that learns from an Oracle to develop task scheduling policies that minimize the execution time of streaming applications from the wireless communications and radar domains. We have evaluated the framework comprehensively with six domain-specific applications and analyzed the storage and latency overheads of the IL policies. The IL policies approximate the Oracle with more than 99% accuracy. Their overhead is low, at approximately 1.1 μs per scheduling decision, which is lower than that of the completely fair scheduler (1.2 μs). The IL policies also achieve application execution times within 9.3% of optimal schedules obtained offline using constraint programming.

Preliminary experiments, in which we use IL to bootstrap RL for task scheduling in heterogeneous many-core platforms, show much faster convergence to optimal policies. We envision this work initiating a new direction in scheduling research, encompassing optimal Oracle generation and evaluation with applications from additional domains.

References

  • [1] A. M. Aji, A. J. Peña, P. Balaji, and W. Feng (2016) MultiCL: Enabling Automatic Scheduling for Task-Parallel Workloads in OpenCL. Parallel Computing 58, pp. 37–55. Cited by: §I.
  • [2] H. Arabnejad and J. G. Barbosa (2013) List Scheduling Algorithm for Heterogeneous Systems by an Optimistic Cost Table. IEEE Trans. on Parallel and Distributed Systems 25 (3), pp. 682–694. Cited by: §II.
  • [3] S. E. Arda et al. (2020) DS3: A System-Level Domain-Specific System-on-Chip Simulation Framework. IEEE Transactions on Computers 69 (8), pp. 1248–1262. Cited by: §V-A.
  • [4] S. Baskiyar and R. Abdel-Kader (2010) Energy Aware DAG Scheduling on Heterogeneous Systems. Cluster Computing 13 (4), pp. 373–383. Cited by: §I, §II.
  • [5] T. Beisel, T. Wiersema, C. Plessl, and A. Brinkmann (2011) Cooperative Multitasking for Heterogeneous Accelerators in the Linux Completely Fair Scheduler. In IEEE International Conference on Application-Specific Systems, Architectures and Processors, pp. 223–226. Cited by: §II.
  • [6] L. F. Bittencourt, R. Sakellariou, and E. R. Madeira (2010) DAG Scheduling Using a Lookahead Variant of the Heterogeneous Earliest Finish Time Algorithm. In Euromicro Conference on Parallel, Distributed and Network-based Processing, pp. 27–34. Cited by: §II.
  • [7] M. R. Garey and D. S. Johnson (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., USA. External Links: ISBN 0716710447 Cited by: §I.
  • [8] V. Goel, M. Slusky, W. van Hoeve, K. C. Furman, and Y. Shao (2015) Constraint Programming for LNG Ship Scheduling and Inventory Management. European Journal of Operational Research 241 (3), pp. 662–673. Cited by: §I, §I.
  • [9] D. Green et al. (2018) Heterogeneous Integration at DARPA: Pathfinding and Progress in Assembly Approaches. ECTC, May. Cited by: §I.
  • [10] Hardkernel (2014) ODROID-XU3. Note: https://wiki.odroid.com/old_product/odroid-xu3/odroid-xu3 Accessed 20 Mar. 2020 Cited by: §V-A.
  • [11] J. L. Hennessy and D. A. Patterson (2019) A New Golden Age for Computer Architecture. Commun. of the ACM 62 (2), pp. 48–60. Cited by: §I.
  • [12] J. Hwang, Y. Chow, F. D. Anger, and C. Lee (1989) Scheduling Precedence Graphs in Systems with Interprocessor Communication Times. SIAM Journal on Computing 18 (2), pp. 244–257. Cited by: §I, §V-I.
  • [13] R. G. Kim et al. (2017) Imitation Learning for Dynamic VFI Control in Large-Scale Manycore Systems. IEEE Trans. on Very Large Scale Integration (VLSI) Systems 25 (9), pp. 2458–2471. Cited by: §I, §II.
  • [14] Y. Kwok and I. Ahmad (1996) Dynamic Critical-Path Scheduling: An Effective Technique for Allocating Task Graphs to Multiprocessors. IEEE Trans. Parallel Distrib. Syst. 7 (5), pp. 506–521. Cited by: §II.
  • [15] J. Mack, N. Kumbhare, U. Y. Ogras, and A. Akoglu (2020) User-Space Emulation Framework for Domain-Specific SoC Design. arXiv preprint arXiv:2004.01636. Cited by: §V-A.
  • [16] S. K. Mandal, G. Bhat, J. R. Doppa, P. P. Pande, and U. Y. Ogras (2020) An Energy-Aware Online Learning Framework for Resource Management in Heterogeneous Platforms. ACM Transactions on Design Automation of Electronic Systems (TODAES) 25 (3), pp. 1–26. Cited by: §I, §II.
  • [17] S. K. Mandal, G. Bhat, C. A. Patil, J. R. Doppa, P. P. Pande, and U. Y. Ogras (2019) Dynamic Resource Management of Heterogeneous Mobile Platforms via Imitation Learning. IEEE Trans. on Very Large Scale Integration (VLSI) Systems. Cited by: §I.
  • [18] H. Mao, M. Alizadeh, I. Menache, and S. Kandula (2016) Resource Management with Deep Reinforcement Learning. In ACM Workshop on Hot Topics in Networks, pp. 50–56. Cited by: §I, §II, §V-E, §V-H.
  • [19] H. Mao, M. Schwarzkopf, S. B. Venkatakrishnan, Z. Meng, and M. Alizadeh (2019) Learning Scheduling Algorithms for Data Processing Clusters. In ACM Special Interest Group on Data Communication, pp. 270–288. Cited by: §I, §II, §II, §V-H.
  • [20] A. Mirhoseini et al. (2017) Device Placement Optimization with Reinforcement Learning. In International Conference on Machine Learning-Volume 70, pp. 2430–2439. Cited by: §II.
  • [21] K. Moazzemi, B. Maity, S. Yi, A. M. Rahmani, and N. Dutt (2019) HESSLE-FREE: Heterogeneous Systems Leveraging Fuzzy Control for Runtime Resource Management. ACM Transactions on Embedded Computing Systems (TECS) 18 (5s), pp. 1–19. Cited by: §V-G.
  • [22] C. S. Pabla (2009) Completely Fair Scheduler. Linux Journal (184). Cited by: §I, §II.
  • [23] B. K. Reddy, A. K. Singh, D. Biswas, G. V. Merrett, and B. M. Al-Hashimi (2017) Inter-Cluster Thread-to-Core Mapping and DVFS on Heterogeneous Multi-Cores. IEEE Transactions on Multi-Scale Computing Systems 4 (3), pp. 369–382. Cited by: §V-G.
  • [24] S. Ross, G. Gordon, and D. Bagnell (2011) A Reduction of Imitation Learning and Structured Prediction To No-Regret Online Learning. In Proc. of the Int. Conf. on Art. Intel. and Stat., pp. 627–635. Cited by: §IV-C.
  • [25] F. Rossi, P. Van Beek, and T. Walsh (2006) Handbook of Constraint Programming. Elsevier. Cited by: §I.
  • [26] R. Sakellariou and H. Zhao (2004) A Hybrid Heuristic for DAG Scheduling on Heterogeneous Systems. In Int. Parallel and Distributed Processing Symposium, pp. 111. Cited by: §I, §II.
  • [27] A. L. Sartor, A. Krishnakumar, S. E. Arda, U. Y. Ogras, and R. Marculescu (2020) HiLITE: Hierarchical and Lightweight Imitation Learning for Power Management of Embedded SoCs. IEEE Computer Architecture Letters 19 (1), pp. 63–67. Cited by: §I.
  • [28] S. Schaal (1999) Is Imitation Learning the Route To Humanoid Robots?. Trends in Cognitive Sciences 3 (6), pp. 233–242. Cited by: §I, §II.
  • [29] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour (2000) Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Advances in Neural Information Processing Systems, pp. 1057–1063. Cited by: §I.
  • [30] V. Swaminathan and K. Chakrabarty (2001) Real-Time Task Scheduling for Energy-Aware Embedded Systems. Journal of the Franklin Institute 338 (6), pp. 729–750. Cited by: §II.
  • [31] H. Topcuoglu, S. Hariri, and M. Wu (2002) Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing. IEEE Trans. on Parallel and Distrib. Syst. 13 (3), pp. 260–274. Cited by: §I, §II.
  • [32] J. D. Ullman (1975) NP-Complete Scheduling Problems. Journal of Computer and System Sciences 10 (3), pp. 384–393. External Links: ISSN 0022-0000, Document Cited by: §I.
  • [33] (2009) V12.8: User’s Manual for CPLEX. International Business Machines Corporation 46 (53), pp. 157. Cited by: §V-A.
  • [34] M. Vasile, F. Pop, R. Tutueanu, V. Cristea, and J. Kołodziej (2015) Resource-Aware Hybrid Scheduling Algorithm in Heterogeneous Distributed Computing. Future Generation Computer Systems 51, pp. 61–71. Cited by: §I, §II.
  • [35] A. Vehtari, A. Gelman, and J. Gabry (2017) Practical Bayesian Model Evaluation using Leave-one-out Cross-validation and WAIC. Statistics and Computing 27 (5), pp. 1413–1432. Cited by: §V-E.
  • [36] Y. Wang et al. (2019) Multi-Objective Workflow Scheduling with Deep-Q-Network-based Multi-Agent Reinforcement Learning. IEEE Access 7, pp. 39974–39982. Cited by: §I.
  • [37] Y. Wen, Z. Wang, and M. F. O’Boyle (2014) Smart Multi-Task Scheduling for OpenCL Programs on CPU/GPU Heterogeneous Platforms. In International Conf. on High Performance Computing, pp. 1–10. Cited by: §II.
  • [38] C. Xian, Y. Lu, and Z. Li (2008) Dynamic Voltage Scaling for Multitasking Real-time Systems with Uncertain Execution Time. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 27 (8), pp. 1467–1478. Cited by: §V-E.
  • [39] T. Xiaoyong, K. Li, Z. Zeng, and B. Veeravalli (2010) A Novel Security-Driven Scheduling Algorithm for Precedence-Constrained Tasks in Heterogeneous Distributed Systems. IEEE Transactions on Computers 60 (7), pp. 1017–1029. Cited by: §II.
  • [40] G. Xie, G. Zeng, L. Liu, R. Li, and K. Li (2016) Mixed Real-Time Scheduling of Multiple DAGs-based Applications on Heterogeneous Multi-core Processors. Microprocessors and Microsystems 47, pp. 93–103. Cited by: §II.
  • [41] Zynq ZCU102 Evaluation Kit. Note: https://www.xilinx.com/products/boards-and-kits/ek-u1-zcu102-g.html, Accessed 10 April 2020 Cited by: §V-A.