
Exploiting RapidWright in the Automatic Generation of Application-Specific FPGA Overlays

Overlay architectures implemented on FPGA devices have been proposed as a means to increase FPGA adoption in general-purpose computing. They provide software-like benefits such as flexibility and programmability, which make it easier to build dedicated compilers. However, existing overlays are generic, resource- and power-hungry, with performance usually an order of magnitude lower than bare-metal implementations. As a result, FPGA overlays have been confined to research and a few niche applications. In this paper, we introduce Application-Specific FPGA Overlays (AS-Overlays), which can provide bare-metal performance to FPGA overlays, thus opening doors for broader adoption. Our approach is based on the automatic extraction of hardware kernels from data flow applications. Extracted kernels are then leveraged for application-specific generation of hardware accelerators. Reconfiguration of the overlay is done with RapidWright, which allows the HDL design flow to be bypassed. Through prototyping, we demonstrated the viability and relevance of our approach. Experiments show a productivity improvement of up to 20x over state-of-the-art FPGA overlays, while achieving over 1.33x higher Fmax than direct FPGA implementation, with the possibility of lower resource and power consumption compared to bare metal.



I Introduction

Over the past decades, FPGAs have continuously matured and now contain millions of logic gates, thousands of DSP blocks, megabytes of BRAM, and other types of resources. This development opens doors to unprecedented hardware acceleration in several computing domains such as deep learning, image and scientific processing, and cloud computing. For instance, Xilinx recently released Alveo cards powered by UltraScale+ FPGAs for data center and artificial intelligence acceleration, gathering four super logic regions, each containing logic elements, BRAM, UltraRAM, and DSP slices [25][29]. The Intel Arria 10 in the Microsoft cloud delivers about a million logic elements along with DSP blocks and megabytes of BRAM [22][11]. Nevertheless, these feature improvements have not translated into widespread use of FPGAs. One reason is that designing for FPGAs remains a challenging endeavour, requiring hardware expertise and long compilation times, which limits the efficient use of FPGA accelerators to niche disciplines involving highly skilled hardware engineers.

To help address this limitation, High-Level Synthesis (HLS) has been proposed [26][8]. It focuses on high-level functionality rather than low-level implementation. However, the required hardware expertise and the prohibitive compilation times (especially placement and routing) still limit productivity and mainstream adoption. The need to make FPGAs more accessible to application developers, who are accustomed to software API abstractions and fast development cycles, therefore remains.

FPGA overlays have been developed to promote FPGAs to a wider user community and to increase design productivity. In general, overlays use coarse-grained processors, which can be programmed from a function call, in a 2D intercommunication infrastructure that allows parallel processing and data exchange among the processors [20][2][10][18][21]. The software nature of the coarse-grained processors makes it possible to develop efficient compilers for automatic mapping of sequential applications, thus increasing their acceptance in the software community.

Unfortunately, these advantages come at the cost of area and performance, limiting overlays to small and moderate-sized applications. Indeed, FPGA overlays are usually an order of magnitude slower than bare-metal implementations, and consume considerably more resources and power. Because the main purpose of FPGAs is hardware acceleration, overlays have therefore not been able to break through.

In this work, we introduce Application-Specific FPGA Overlays (AS-Overlays), a novel form of FPGA overlays designed for data flow applications. AS-Overlays provide the flexibility of state-of-the-art overlays on one hand and bare-metal performance on the other. They leverage application-specific architectural components for efficient bare-metal implementation of functions needed by run-time applications, effectively eliminating the intermediate layers of conventional overlays. We propose an approach for automatic generation of overlay kernels from applications. The proposed approach differs from traditional HLS in that kernels are identified from a set of high-level programming language (HLPL) applications, with no hardware description language (HDL) generation and no usage of a domain-specific language (DSL). Specifically, our contributions include:

  1. An application-specific FPGA overlay generation flow for productivity, performance, and power consumption improvement.

  2. An automatic identification of application kernels through intermediate representation inspection using the Low Level Virtual Machine (LLVM) [16], a compilation and code instrumentation framework.

  3. Systematic hardware generation from identified kernels using RapidWright to shorten design cycles and generate tailored netlists [17].

In the rest of the paper, section II revisits recent research, section III describes our proposed AS-Overlays design flow, section IV discusses RapidWright features and defines data flow applications in the context of this work, section V details kernel mining within applications, section VI discusses systematic hardware generation, section VII presents experimental observations, and section VIII concludes the paper.

II Related Work

Published work on coarse-grained reconfigurable architectures and FPGA overlays such as [23, 13, 2] essentially describes dataflow machines, usually consisting of small arithmetic and logic units and registers, all immersed in a switch-based interconnect structure. The processors are homogeneous and programmable, and not tailored for specific applications. Overlays such as Hoplite [13], FLexiTASK [20], and Quattor [21] have dedicated optimizations, mostly focusing on the interconnect and communication infrastructure. Hoplite-DSP [4] is the closest to the approach proposed in this work in that it leverages DSP blocks on the FPGA for dedicated implementation. However, Hoplite-DSP is still a generic architecture with homogeneous processing units.

Several works in the literature have discussed solutions for automating the generation of hardware accelerators on FPGAs. Ma et al. [19] proposed a flow relying on pre-synthesized functions for runtime generation of FPGA accelerators. It nevertheless requires mastering a specific DSL and is not optimized for performance. Ishebabi et al. [12] presented methods for automatic synthesis targeting arrays of multiprocessors on chip using exact formulations such as integer linear programming or answer set programming. However, the search for kernels is done using profiling. In the same line of thought, Koeplinger et al. [15] presented a framework for automatic generation of efficient application-specific FPGA accelerators. Parallel pattern inputs aim to raise the level of abstraction for programmers in addition to providing purposeful information to the compiler. Their approach nevertheless relies on parallel inputs, which do not necessarily reflect how developers would typically implement applications. Other tools such as LegUp [3] and Vivado HLS [26] allow designers to write code in an HLPL and then compile it to a register transfer level (RTL) design specification. Though Vivado HLS can deliver competitive quality of results (QoR) compared to manual RTL [6], it still incurs design effort and long compilation times. LegUp provides a built-in profiler to identify computation-intensive code regions for acceleration. Applications are then modified to run partially on a MIPS CPU and partially on a hardware accelerator on the FPGA. Runtime communication between the CPU and the accelerator, coupled with potential cache coherency issues, might limit the performance achievable by the platform. The work of Cong et al. [5] is similar to ours in that they applied graph-based techniques to identify frequent patterns by analyzing graph edit distances. That work nevertheless differs from ours as patterns are detected for optimized FPGA resource sharing during the binding phase of behavioral synthesis.

In contrast to the aforementioned research, our AS-Overlays generation flow leverages advanced graph mining techniques to find kernels in applications and builds a library of accelerators that can be combined into an FPGA overlay to improve performance, productivity, and power consumption.

III Design Flow

The major limitation of FPGA overlays resides in that they most often feature more resources than are actually needed, resulting in increased power consumption and performance loss. They are typically made of several processing elements (PEs) and an interconnect. PEs generally contain registers and a functional unit capable of executing a set of functions, resulting in architectures not optimized for specific tasks. In Figure 1, we propose a design flow to build FPGA overlays that can compete with bare-metal implementations. A few steps are necessary to produce an AS-Overlay:

  1. Specify the application with an HLPL.

  2. Inspect the bytecode or intermediate representation (IR) of the application at compile time to extract compute-intensive code sections that we identify as kernels.

  3. Optimize kernels to remove unneeded instructions.

  4. Manually pre-synthesize basic operations from the IR instruction set using vendor tools: this step is done exactly once, and the synthesized netlists can be reused across several other applications.

  5. Combine the pre-synthesized basic operations according to kernel descriptions to generate hardware circuits.

Figure 1 also illustrates the overall Architecture supporting the proposed design flow. After the identification of kernels, the LLVM Pass generates a new source code equivalent to the input program, in which kernels’ instructions are replaced by hardware calls.

Fig. 1: Design Flow, Utilization Flow and Architecture

We use LLVM to search for kernels as it allows transparent optimization on applications written in an arbitrary HLPL. Each application is parsed with an LLVM frontend to output an IR. The produced LLVM IR is then converted into data flow graphs ("Overlay Abstract Representation" in Figure 1) for analysis, and kernels are identified. RapidWright is further leveraged to automatically generate kernel netlists by assembling, like puzzle pieces, a set of pre-synthesized LLVM IR operations. We use RapidWright because it is designed to quickly stitch together pre-implemented modules with minimal QoR loss. Finally, hardware kernels are embedded within PEs of an arbitrary overlay architecture ("Overlay Concrete Representation" in Figure 1), and Vivado is used to place and route the AS-Overlay. In the utilization flow, the new optimized C/C++ code alongside the mapping library can then be compiled and run on a SoC. The mapping library is made of a set of functions handling data copies to/from the FPGA, removing the need for hardware expertise. In the rest of the paper, the focus is mainly set on kernel mining and hardware generation.

IV Preliminary

RapidWright [17] is an open-source Java framework from Xilinx Research Labs that provides a bridge to the Vivado back-end at different compilation stages through design checkpoint (DCP) files. By exposing logical/physical netlist data structures and functions, it enables custom netlist manipulation and direct access to logic and routing resources such as look-up tables (LUTs), flip-flops (FFs), and programmable interconnect points from a Java API (see Figure 2).


Fig. 2: RapidWright Flow vs Vivado Flow.

As opposed to vendor tools, which are closed source, we believe that full access to RapidWright's internal features and design resources makes it suitable for design flow exploration and the implementation of targeted FPGA solutions.

Data Flow Applications: the data flow concept refers to a way of looking at the execution flow of instructions in an application. It gives a perspective on operations and their interactions. For modeling purposes, we use the data flow graph (DFG) representation in which nodes represent machine operations; internal edges, data flowing between pairs of operations; and external edges, connections with inputs/outputs. We study these applications because they represent the basis of computation in several domains such as image and video processing, or deep learning. In the subsequent sections, we will study how kernels are identified and how the corresponding hardware is generated. In the rest of the paper, we will refer to "data flow applications" simply as "applications".
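As a minimal illustration (our own sketch, not the paper's implementation), the DFG representation above can be modeled as a set of operation-labeled vertices plus directed edges carrying data between them:

```python
# Minimal sketch of the DFG representation described above:
# vertices carry operation labels, edges model data flow.
from dataclasses import dataclass, field

@dataclass
class DFG:
    labels: dict = field(default_factory=dict)   # vertex id -> operation label
    edges: list = field(default_factory=list)    # (src, dst) pairs

    def add_op(self, vid, op):
        self.labels[vid] = op

    def add_edge(self, src, dst):
        self.edges.append((src, dst))

# Example: c = a * b; d = c + e  ->  two operation nodes, one internal edge
g = DFG()
g.add_op(0, "mul")
g.add_op(1, "add")
g.add_edge(0, 1)          # result of the mul feeds the add
print(g.labels, g.edges)  # {0: 'mul', 1: 'add'} [(0, 1)]
```

External edges (connections to inputs/outputs) would simply reference vertex ids not present in `labels`; they are omitted here for brevity.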

V Kernel Mining

This section discusses how compute-intensive code portions are extracted from the IR of applications. We begin with background definitions necessary to understand terminologies, then we detail data structures and the kernel mining algorithm.

V-A Background Definitions

Definition 1.

A Control Data Flow Graph (CDFG) is a directed graph G = (V, E, L), where V represents the set of vertices, E the set of edges, and L the set of labels, with l: V ∪ E → L being the labeling of vertices and edges. In the context of this work, a graph is a CDFG defined at a basic block (BB) level.

Definition 2.

An isomorphism between a graph G and a graph H is a bijective function f: V(G) → V(H) that preserves edges and labels. It captures the similarity between G and H, and therefore prevents recording multiple instances of the same graph when such a function f can be found. A subgraph isomorphism from G to H is an isomorphism from G to a subgraph of H.

Definition 3.

Let D be a set of CDFGs. The support sup(g) of a subgraph g is its frequency of appearance in D.

Definition 4.

Given a set of CDFGs D and a threshold σ (in this case the minimum support value), kernel mining consists in finding the graphs g in D such that sup(g) ≥ σ. The kernels are then the set of such graphs g.

Definition 5.

A Depth First Search (DFS) traversal of a graph defines the order in which its edges are visited: that sequence of edges represents the DFS code of the graph.
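To make Definition 5 concrete, the following hedged sketch (our own, not the paper's code) records the order in which a depth-first traversal visits edges; the neighbor visit order (ascending vertex id) is an assumption made for determinism:

```python
# Sketch of Definition 5: a DFS traversal fixes an order on the edges
# of a graph; that edge sequence is the graph's DFS code.
def dfs_code(adj, start):
    """Return the list of edges in the order a DFS from `start` visits them."""
    code, seen = [], {start}
    def visit(u):
        for v in sorted(adj.get(u, [])):   # deterministic neighbor order
            code.append((u, v))
            if v not in seen:
                seen.add(v)
                visit(v)
    visit(start)
    return code

# Small example: 0 -> 1, 0 -> 2, 1 -> 2
adj = {0: [1, 2], 1: [2]}
print(dfs_code(adj, 0))  # [(0, 1), (1, 2), (0, 2)]
```

Two isomorphic graphs can yield different DFS codes depending on the start vertex, which is why the mining later relies on a minimal, canonical DFS code.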

V-B Kernel Mining

Prior to kernel mining, we build a DFG with properties that best capture the input program. Each vertex is labeled with the operation of the instruction it represents, while edges display the order of precedence between operations. The mining algorithm is summarized in Algorithm 1.

Input : LLVM IR, minSup
Output : C++ source file
1 Function kernelMining (GS, Fsubgraphs, minSup):
2          sort labels of the vertices and edges in GS by frequency (using DFS code);
3          remove infrequent vertices and edges;
4          relabel the remaining vertices and edges (descending);
5          S1 := all frequent 1-edge graphs;
6          sort S1 in DFS lexicographic order;
7          Fsubgraphs := S1;
8          foreach edge e in S1 do
9                   init g with e;
10                   set g.DS := { h ∈ GS | e ∈ E(h) };
11                   subgraphMining(GS,Fsubgraphs,g);
12                   GS := GS - e;
13                   if (|GS| < minSup) then
14                            break;
16                   end if
18          end foreach
20 return
21 Function codeInjection (Fsubgraphs):
22          setLines lines;
23          setVariables var;
24          set files;
25          foreach Graph graph in Fsubgraphs do
26                   foreach Instruction inst in graph do
27                            lines := getInstructionLine(inst);
28                            var := getVariables(inst);
29                            files := getfileName(inst);
31                   end foreach
32                  injectFunctions();
34          end foreach
36 return
/* Main function defined as a ModulePass */
37 Function generateDFG (Module M):
38          setGraphs GS;
39          setGraphs Fsubgraphs;
40          foreach Function FF in M do
41                   foreach BasicBlock BB in FF do
42                            Graph BBgraph;
43                            foreach Instruction II in BB do
44                                     BBgraph := II;
45                            end foreach
46                           GS := BBgraph;
47                   end foreach
48                  kernelMining (GS, Fsubgraphs, minSup);
49                   codeInjection (Fsubgraphs);
51          end foreach
53 return
Algorithm 1 LLVM Pass for Kernel Mining
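The first iterations of Algorithm 1 can be illustrated with a simplified sketch that only handles the frequent 1-edge graphs (line 5): edges are abstracted as (source label, sink label) pairs, and support is counted as the number of basic-block graphs containing the edge. This is our own simplification, not the pass itself:

```python
# Simplified sketch of the frequent 1-edge step of the mining:
# count the per-graph support of each labeled edge, keep those >= minSup.
from collections import Counter

def frequent_one_edge(graphs, min_sup):
    support = Counter()
    for g in graphs:                 # g: set of (src_label, dst_label) edges
        for edge in set(g):          # count each edge once per graph
            support[edge] += 1
    return {e for e, s in support.items() if s >= min_sup}

# Three basic-block graphs; only ("mul", "add") appears in at least two.
bbs = [
    {("mul", "add"), ("add", "store")},
    {("mul", "add"), ("load", "mul")},
    {("mul", "add")},
]
print(frequent_one_edge(bbs, 2))  # {('mul', 'add')}
```

The full algorithm then grows each frequent edge into larger subgraphs (`subgraphMining`), pruning candidates whose DFS code is not minimal.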

In the control-flow kernel mining, a CDFG normally consists of several DFGs. Nevertheless, the labeling is done such that all the DFGs belonging to a BB remain in the same hierarchy (lines 24-26), even if no common edge exists between them. Overall, the kernel mining follows several steps, including:

  1. Generate candidates using a DFS-based approach (line 2),

  2. Prune the candidates to remove infrequent vertices and edges (line 3 to 7),

  3. Evaluate the support value to decide whether a candidate is a kernel or not (line 8 to 16).

However, the isomorphism search during candidate pruning is known to be NP-complete [7], and several subgraph isomorphism techniques such as those described in [14] and [31] lead to high computation overhead. One way to mitigate that overhead consists in computing the canonical form of graphs [24]: if the canonical forms of two graphs are identical, the graphs are considered isomorphic. We therefore construct the canonical form of DFS codes as in [30], which is the minimum code that can be derived from a graph. Specifically, the strategy consists in:

  1. Building frequent subgraphs bottom-up, using DFS codes as a regularized representation.

  2. Eliminating redundancies via minimal canonical DFS codes based on lexicographic ordering.

Given that there is a considerable number of DFS codes, we build a DFS code tree using a lexicographic ordering [30] between DFS codes, such that: (1) each node represents a DFS code; (2) the relationships between parent and child nodes conform to the lexicographic ordering; (3) siblings are consistent with the DFS lexicographic order.

The DFS code tree structure is particularly useful in kernel mining as it allows the following two observations: (1) if a DFS code is frequent, then every ancestor of that code is frequent; (2) if a DFS code is not frequent, then every descendant of that code is not frequent (line 11).

Finally, the custom C/C++ code is generated by replacing kernels' instructions with high-level functions for hardware acceleration. Original variable names and their locations are retrieved by inspecting the debug metadata (lines 23-27) attached to each instruction in the IR. Like several other compilers and debuggers, LLVM uses the standardized DWARF [9] debugging data format to support source-level debugging. The metadata provides the relationships between the generated code and the source code of the original program.

Fig. 3: Graph Pruning

Once kernels are extracted from an application, a final graph pruning is still needed. It consists in selecting operations that can actually be mapped on FPGAs. This pruning follows two main stages: (1) Removing loads/stores: since code is initially written for von Neumann architectures, the LLVM IR introduces a set of load and store instructions that are not needed on the FPGA. We therefore only consider operations other than such instructions, as illustrated in Figure 3. (2) Avoiding conversions: LLVM often inserts casting operations such as zext that are not qualified for FPGA acceleration.
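The two pruning stages can be sketched as a simple filter over a labeled DFG; the set of non-mappable opcodes below is illustrative, not the paper's exact list:

```python
# Sketch of the final graph pruning: drop LLVM IR operations that have
# no FPGA counterpart (loads/stores, casts such as zext), keeping only
# edges between the remaining mappable operations.
UNMAPPABLE = {"load", "store", "zext", "sext", "trunc", "bitcast"}

def prune(labels, edges):
    keep = {v for v, op in labels.items() if op not in UNMAPPABLE}
    return ({v: labels[v] for v in keep},
            [(u, v) for (u, v) in edges if u in keep and v in keep])

# load -> mul -> add -> store  becomes  mul -> add
labels = {0: "load", 1: "mul", 2: "add", 3: "store"}
edges = [(0, 1), (1, 2), (2, 3)]
print(prune(labels, edges))  # ({1: 'mul', 2: 'add'}, [(1, 2)])
```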

We further study dependencies between basic blocks, searching for additional optimization opportunities. We mainly seek to merge kernels displaying dependencies to save resources and reduce the global latency. As an example, Figure 4 pictures three kernels spread across basic blocks BB0, BB1, and BB2 (the kernels are encircled with dotted lines).

Fig. 4: Kernels with Dependencies
Fig. 5: Merged Kernel

Because of the data dependencies between BB0-BB1 (content of the register b0) and BB0-BB2 (content of register a0), we generate the more complex kernel illustrated in Figure 5. We insert demuxes for conditional branches and FFs for temporary storage. Instead of having the kernels deployed over three PEs, we can then use a single processing core.

The following section describes how a placed and routed AS-Overlay is obtained from LLVM kernels.

VI Hardware Generation of Kernels

Initially, the function implemented by a PE in the overlay layout is defined as a black-box. We leverage the pre-implemented design flow of RapidWright [17] to produce netlists from kernels previously identified with LLVM. The first step consists in synthesizing basic operations from the LLVM IR out-of-context (OOC) with Vivado to create a library of modules. Modules are built OOC to ensure that I/O buffers and global clocks are not inserted into kernel netlists [27]. This stage implies a manual implementation (through HDL or HLS) of the operations to combine into kernels, with the advantage that it is done only once. In the following stage, an application built on the RapidWright API stitches pre-implemented modules following the LLVM kernel DFG descriptions. The hardware kernels thus generated are returned as design checkpoint files and define the functions to be executed in PEs. Finally, the RapidWright application opens the netlist of the overlay (EDIF or DCP files), browses through the design cells, and reads kernel DCPs into PE black-boxes.

Fig. 6: Example of Overlay Generation.

Figure 6 illustrates the AS-Overlay generation steps. From an application, the LLVM tool identifies kernels and generates the corresponding DFGs. Each DFG is dumped into a list of vertices and edges. Vertices start with the character v and are characterized by an identifier and a label denoting the operation. Edges are introduced by the letter e and are defined with an identifier, the two vertices they connect, and a letter (L for left, R for right) specifying whether the edge enters the sink vertex through the left or right input. In the next step, a DCP/EDIF containing the layout of the overlay (with the PE functions still being black-boxes) is opened within the RapidWright application, the generated hardware kernels are successively read into the PEs, and a new DCP is created for the overlay, this time with each PE implementing a specific function derived from the LLVM kernel mining. Finally, placement and routing are run with Vivado, and a bitstream of the overlay is produced for FPGA deployment. As opposed to the traditional RapidWright pre-implemented flow, which implies synthesizing, placing, and routing modules OOC [17], basic operations from the LLVM IR are only synthesized. Indeed, for accurate Vivado post-routing timing analysis, partition pin constraints must be defined on the input ports of OOC modules, with the consequence of attaching pre-implemented modules to specific FPGA regions [27]. Since kernel netlists are automatically generated with RapidWright from DFGs, it is not possible to know in advance which FPGA resources will be used and how they will actually be assembled into hardware kernels. We therefore limit the pre-implementation of LLVM IR operations to the synthesis stage.
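A parser for the DFG dump just described might look as follows; the exact token layout ("v <id> <label>" and "e <id> <src> <sink> <L|R>") is our assumption, since the paper only lists the fields each line carries:

```python
# Sketch of a parser for the per-kernel DFG dump format: one "v" line
# per vertex (id, operation label) and one "e" line per edge
# (id, source vertex, sink vertex, input side of the sink).
def parse_dfg(text):
    vertices, edges = {}, []
    for line in text.strip().splitlines():
        tok = line.split()
        if tok[0] == "v":
            vertices[int(tok[1])] = tok[2]           # id -> operation label
        elif tok[0] == "e":
            # source, sink, and whether the edge feeds the sink's L or R input
            edges.append((int(tok[2]), int(tok[3]), tok[4]))
    return vertices, edges

dump = """v 0 mul
v 1 add
e 0 0 1 L"""
print(parse_dfg(dump))  # ({0: 'mul', 1: 'add'}, [(0, 1, 'L')])
```

In the actual flow, the RapidWright application would walk this structure and instantiate the pre-synthesized module matching each vertex label.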

VI-A Datapath Regularization

To reduce overall latency and data management overhead, datapaths must be regularized. Each operation within a kernel comes with its own latency in clock cycles. We must therefore ensure that operands arrive at the boundary of each module at the same time to obtain correct results. Figure 7(a) shows an example of a graph needing regularization. If we assume that addition and multiplication respectively require 1 and 6 clock cycles, the right operand of vertex "mul 1" and the left operand of vertex "sub 2" must be delayed by 1 and 7 clock cycles, respectively. This task is done by the RapidWright application, which inserts FFs on the path as illustrated in Figure 7(b). Inserting FFs does not increase the overall latency as the number of FFs matches the cumulative latency of operations on the datapath.
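The regularization step can be sketched as a longest-path computation over the kernel DFG: each edge receives as many delay FFs as the gap between its source's result time and the latest-arriving operand of its sink. The latency table and example graph below are illustrative assumptions, not the paper's data:

```python
# Sketch of datapath regularization: compute each operand's arrival
# time, then pad early edges with FFs so both inputs of every
# operation arrive on the same cycle.
LATENCY = {"add": 1, "sub": 1, "mul": 6, "in": 0}  # illustrative latencies

def balance(nodes, edges):
    """nodes: id -> op (in topological order); edges: (src, dst) pairs.
    Returns the number of FFs to insert on each edge."""
    ready = {}
    for n in nodes:
        preds = [s for (s, d) in edges if d == n]
        arrive = max((ready[p] for p in preds), default=0)
        ready[n] = arrive + LATENCY[nodes[n]]
    ffs = {}
    for (s, d) in edges:
        latest = max(ready[p] for (p, q) in edges if q == d)
        ffs[(s, d)] = latest - ready[s]    # delay the early operand
    return ffs

# a*b feeds an add whose other operand comes straight from an input port:
# the direct edge must be delayed by the multiplier latency (6 cycles).
nodes = {0: "in", 1: "in", 2: "mul", 3: "add"}
edges = [(0, 2), (1, 2), (2, 3), (1, 3)]
print(balance(nodes, edges))  # {(0, 2): 0, (1, 2): 0, (2, 3): 0, (1, 3): 6}
```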

(a) Non-Regularized Datapath
(b) Regularized Datapath
Fig. 7: Datapath Regularization

VI-B Processing Elements

We do not discuss interconnection topology between PEs as the focus is not on obtaining improved/flexible communication; rather, we emphasize architectural features supporting the automatic generation of hardware kernels.

Fig. 8: PE Architecture

In addition, the proposed AS-Overlay generation flow is designed for, and well suited to, any interconnect topology (mesh, torus, mixed topology, etc. [1]) as only the PE processing core changes. We therefore look at the minimum architectural set-up that should be embedded in each PE. Figure 8 illustrates the PE architecture. To handle kernels with multiple inputs/outputs as shown in Table IV, the I/O buses have parameterizable sizes and are split into 32-bit channels. Inputs and outputs are temporarily stored in I/O queues to avoid data loss in the case of multiple clock domain crossings. The Control module is configured with the latency of the kernel programmed in the PE, allowing it to orchestrate when to fetch data from the input queues and when to write results into the output queues. The black-box is the core of the PE as it implements one or multiple kernels derived from the LLVM code inspection.

             Matrix Mult                       Outer Product
Size     BM      Reg.     AS-Ov     Size      BM      Reg.     AS-Ov
8x8      0.39    1.71     0.58      8x8       0.043   0.046    0.043
16x16    3.05    13.65    4.57      16x16     0.113   0.116    0.113
32x32    24.29   109.32   36.42     32x32     0.396   0.400    0.396
64x64    194.20  873.84   291.29    64x64     1.53    1.54     1.53

             Robert Cross                      Smoothing
Size     BM      Reg.     AS-Ov     Size      BM      Reg.     AS-Ov
16x16    0.19    0.58     0.19      16x16     3.48    4.35     4.15
32x32    0.756   2.28     0.756     32x32     13.68   17.1     16.3
64x64    3.03    9.12     3.03      64x64     54.72   68.40    65.36
128x128  12.13   36.42    12.13     128x128   218.52  273.15   261.01

TABLE I: Execution Time Comparison on 3x3 PEs (BM = Bare Metal, Reg. = Regular Overlay, AS-Ov = AS-Overlay)

VII Experimental Observations

VII-A Evaluation Platform and Setup

For evaluation purposes, designs are implemented on a Xilinx Kintex UltraScale+ FPGA (xcku5p-ffvd900-2-i). Hardware generation is conducted with Vivado HLx Edition v2018.2, and RapidWright v2018.2.5-beta is used to assemble hardware kernels. We ran Vivado, RapidWright, and the LLVM kernel mining on a computer equipped with an Intel Core i3-8130U CPU @ 2.20GHz and 8 GB of RAM. We study image processing and matrix-based applications. Though kernels from the applications could all be deployed on the AS-Overlay together, we run the applications individually with the purpose of assessing achievable performance, in particular: (1) global latency, (2) Fmax and productivity, (3) resource utilization, and (4) power consumption, when comparing AS-Overlays to regular overlays and bare-metal implementations. For each application, we design a 3x3-PE overlay in three flavors: (i) Bare Metal: functions are implemented in HDL, embedded in PEs for multitasking, and compiled with Vivado. (ii) Regular Overlay: each PE implements an ALU offering a dozen arithmetic and logic operations. The regular overlay is implemented in HDL and also compiled with Vivado. (iii) AS-Overlay: kernels are implemented into PEs using the design flow described in Figure 1.

VII-B Evaluation Results

Data are sent to the FPGA through a set of custom C functions as mentioned in the utilization flow of Figure 1. The execution times recorded in Table I come from placing the bare-metal, regular overlay, and AS-Overlay implementations of each application alongside a MicroBlaze CPU driven by a common global clock.

Fig. 9: Execution Improvement

Table I shows that AS-Overlays can effectively compete with bare-metal implementations in several test cases. The bare metal nevertheless outperforms AS-Overlays on image smoothing and matrix multiplication because of the additional clock cycles introduced by the RapidWright application. In fact, to ensure timing closure when integrating kernels within the AS-Overlay fixed sections (PE architecture + interconnect), FFs are injected on the datapath after each operation. Table I also shows that AS-Overlays compute faster than regular overlays when clocked at the same frequency. Figure 9 presents the improvement in execution time when averaging over all the tested applications. This performance gain is amplified by the Fmax study, as higher-clocked circuits can significantly reduce execution times. To carry out the Fmax and productivity studies summarized in Table III, for each application we introduced a Phase-Locked Loop generating the clock on each flavor of overlay (requesting higher frequencies returned negative slacks too high in some of the bare-metal implementations). The idea was to observe the maximum frequency and how long the compilation would take. As a first observation, AS-Overlays can achieve a substantially improved Fmax compared to regular overlays on the tested applications. This is caused by the general-purpose ALUs of regular overlays, which contain several muxes introducing substantial delays on the datapaths. On the other hand, bare-metal implementations achieved higher Fmax than AS-Overlays on the outer product and Robert cross filter. This comes down to an observation made in [17]: vendor tools such as Vivado often produce high-performance results for small modules of a design. In this case, the outer product and Robert cross are respectively a set of independent multiplications, and subtractions followed by comparisons, which gives bare metal an Fmax advantage over AS-Overlays. That advantage is nevertheless lost on more complex functions such as image smoothing (where the AS-Overlay achieved a higher Fmax), which computes the average of adjacent pixels, highlighting the benefits of the RapidWright pre-implemented flow: smaller modules can be pre-implemented to achieve maximum frequency and later be assembled with minimal QoR loss. The reported compilation times show that kernel netlist generation and loading within PE black-boxes with the RapidWright application outperforms Vivado synthesis both for regular overlays and bare-metal implementations. Table III finally demonstrates that the proposed AS-Overlay generation flow can provide up to 20x productivity improvement over regular overlays on the tested benchmarks.

(a) Number of Look-Up Tables
(b) Number of Flip-Flops
(c) Number of DSP Blocks
(d) Number of Block RAMs
Fig. 10: FPGA Resource Utilization

One way to load hardware kernels into PE black-boxes could have been to use the Vivado read_checkpoint TCL command in place of the RapidWright API used in the proposed approach. Vivado loading nevertheless results in higher time and memory overhead, as shown in Table II: Vivado in TCL mode or with the graphical user interface (GUI) incurs a higher time penalty and RAM utilization than doing the same operation from RapidWright. While the RapidWright application uses a few hundred megabytes of RAM on the testing computer and loads kernels in about 2 seconds, Vivado launched with either the GUI or the command line interface (CLI) uses about a gigabyte of RAM and requires up to 6 seconds to complete loading the hardware kernels. This observation justifies why we only use Vivado for placement and routing.

Loading Method | Matrix Mult       | Outer Product     | Robert Cross      | Smoothing
RapidWright    | 2.16 s / 215.2 MB | 2.07 s / 129.6 MB | 2.13 s / 141.2 MB | 2.05 s / 142.2 MB
Vivado (GUI)   | 5.62 s / 1.4 GB   | 5.10 s / 1.4 GB   | 5.32 s / 1.4 GB   | 6.19 s / 1.5 GB
Vivado (CLI)   | 2.19 s / 925.3 MB | 2.12 s / 924.2 MB | 2.18 s / 923.1 MB | 2.94 s / 936.1 MB
TABLE II: Kernel Loading Time & Memory Usage
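The gap reported in Table II can be quantified with simple ratios. The snippet below checks the matrix-multiplication column, assuming (as the prose suggests) that the 1.4 GB rows correspond to the Vivado GUI run and the ~925 MB rows to the CLI run:

```python
# matrix-multiplication column of Table II (seconds, megabytes)
rapidwright_s, rapidwright_mb = 2.16, 215.2
vivado_gui_s, vivado_gui_mb = 5.62, 1400.0
vivado_cli_s, vivado_cli_mb = 2.19, 925.3

# overhead of Vivado loading relative to the RapidWright application
gui_time_x = vivado_gui_s / rapidwright_s    # GUI time penalty
gui_mem_x = vivado_gui_mb / rapidwright_mb   # GUI RAM penalty
cli_mem_x = vivado_cli_mb / rapidwright_mb   # CLI RAM penalty
```

Even the CLI, whose loading time is close to RapidWright's, still holds several times more RAM, which supports restricting Vivado to placement and routing only.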

Figure 10 summarizes FPGA resource utilization only for the matrix multiplication (because of page limitations, the same study could not be presented for each tested application), illustrating how the fabric is progressively occupied as the number of PEs is scaled up. In general, the total amount of resources used by AS-Overlays is close to that of bare metal, and both are far below regular overlays. Figure 10(c) nevertheless displays the same number of DSPs for all three platforms (the purple, red, and yellow lines are superimposed, so only the yellow line is visible) simply because the PEs on the three platforms each implement a single multiplier using 4 DSP48E2s.

Fig. 11: CLB Spreading
Fig. 12: Power Consumption

Pre-implementing basic functions from LLVM IR also has the potential to reduce resource utilization, as illustrated in Figure 10(d): Vivado optimizes the individual hardware implementations from LLVM IR without inserting BRAMs, while it adds such resources when compiling bare metal and regular overlays, which translates into higher power consumption as PEs are added: Figure 12 shows the bare metal consuming measurably more power than the AS-Overlay at the same PE count. Figure 10(b) reports a higher utilization of FFs in AS-Overlays as opposed to bare metal. This is due to FF insertion during datapath regularization, in addition to the input, config, and output registers of the PE architecture (see Figure 8). While injecting FFs on the datapath, as explained in Section VI-A, only adds pipeline latency rather than stalling kernel execution, it obviously increases the number of FFs used, depending on the structure of the kernel DFGs. This is nevertheless a price to pay to ensure the correctness of results produced by hardware kernels.
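The effect of the injected datapath FFs can be illustrated with a toy model (an illustrative assumption, not the paper's actual datapath): registering each operation turns a two-op computation into a two-stage pipeline, producing identical results at a fixed extra latency while still accepting one input per cycle.

```python
def combinational(x, c):
    # reference single-cycle datapath: multiply then add
    return x * 2 + c

def pipelined(xs, c):
    # FFs (r1, r2) injected after each operation make a two-stage pipeline;
    # r2 is computed from the *old* r1 before r1 updates, modeling
    # simultaneous register updates on a clock edge
    r1 = r2 = None
    out = []
    for x in xs + [None, None]:  # two extra cycles to drain the pipeline
        if r2 is not None:
            out.append(r2)
        r2 = None if r1 is None else r1 + c  # stage 2: add
        r1 = None if x is None else x * 2    # stage 1: multiply
    return out
```

The pipelined version emits the same sequence of results as the combinational one, two cycles late, which matches the observation that the injected FFs cost clock cycles on short runs but grow the FF count rather than degrade throughput.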

Overlay Generation Flow | Matrix Mult | Outer Product | Robert Cross | Smoothing
Bare Metal
  Synthesis (s)         | 26          | 17            | 35           | 35
  Optimization (s)      | 5           | 2             | 6            | 34
  Placement (s)         | 28          | 23            | 44           | 85
  Routing (s)           | 68          | 59            | 55           | 1064
  Total (s)             | 127         | 101           | 140          | 1218
  Fmax (MHz)            | 365         | 488           | 348          | 231
Regular Overlay (Vivado Flow)
  Total (s)             | 1635 (27 minutes 15 seconds)
  Fmax (MHz)            | 304
AS-Overlay
  Kernel Gen. (s)       | 3.89        | 3.48          | 3.55         | 4.34
  Kernel Load. (s)      | 2.16        | 2.07          | 2.13         | 2.05
  Optimization (s)      | 5           | 4             | 3            | 33
  Placement (s)         | 46          | 19            | 24           | 83
  Routing (s)           | 65          | 54            | 117          | 646
  Total (s)             | 122.05      | 82.55         | 149.68       | 768.39
  Fmax (MHz)            | 435         | 447           | 318          | 308
TABLE III: Productivity Analysis & Maximum Frequency
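The productivity figures in Table III can be cross-checked with a little arithmetic. Assuming the single 1635 s Vivado flow total stands in for the regular overlay compile, dividing it by the per-application AS-Overlay totals recovers the up-to-20x productivity claim:

```python
vivado_flow_total = 1635.0  # regular overlay compile time, seconds
# AS-Overlay totals: matrix mult, outer product, Robert cross, smoothing
as_overlay_totals = [122.05, 82.55, 149.68, 768.39]

# productivity speedup of the AS-Overlay flow over the Vivado flow
speedups = [vivado_flow_total / t for t in as_overlay_totals]
best = max(speedups)  # ~19.8x, i.e. "up to 20x"
```

The best case comes from the outer product (82.55 s total), while even the slowest application (smoothing, dominated by its 646 s routing step) still compiles about twice as fast as the Vivado flow.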

We also assess the overall use of the FPGA layout. Without defining pblock constraints on the designs, we study how Vivado spreads circuits across Configurable Logic Blocks (CLBs) on the FPGA. This provides a good measure of how much space remains available on the fabric (see Figure 11). At the tested PE count, the regular overlay spreads over nearly all of the CLBs available on the Kintex UltraScale+ [28], making it impossible to load any other design on the chip. On the other hand, the bare metal and the AS-Overlay use a far smaller share of the available CLBs, leaving enough room to fit the domain-specific implementation alongside other design modules on a single chip. Table IV quantifies the maximum number of inputs and outputs of the kernels identified on the benchmarks used, and the overhead associated with kernel mining. Adding the kernel-detection pass within LLVM only incurs additional compilation time on the order of milliseconds. The smoothing kernel recorded the highest number of inputs; even so, the resulting datapath widths are not an issue on modern FPGAs.

Kernel        | Inputs | Outputs | Compile Time | With Kernel Mining
Matrix Mult   | 3      | 1       | 4.12 s       | 4.59 s (1.1x)
Outer Product | 2      | 1       | 0.048 s      | 0.12 s (2.5x)
Robert Cross  | 4      | 1       | 0.15 s       | 0.30 s (2x)
Smoothing     | 10     | 1       | 0.16 s       | 0.26 s (1.6x)
TABLE IV: Kernel I/O & Compilation Time Comparison
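The mining-overhead factors reported in Table IV follow directly from the two compile times (the fourth kernel is taken to be the smoothing filter, per the discussion of input counts above):

```python
# (compile time without mining pass, with mining pass), in seconds
times = {
    "matrix_mult":   (4.12, 4.59),
    "outer_product": (2 * 0.024, 0.12),  # 0.048 s baseline
    "robert_cross":  (0.15, 0.30),
    "smoothing":     (0.16, 0.26),
}

# relative compile-time overhead factor of enabling kernel mining
overhead = {k: round(b / a, 1) for k, (a, b) in times.items()}
```

The relative overhead looks large for the tiny benchmarks (2.5x for the outer product) but the absolute cost stays below half a second, consistent with the claim that the pass adds only sub-second compilation time.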

Ma et al. [19] reported a productivity improvement which unfortunately only accounts for synthesis time (no details are provided on placement and routing times). Further, they did not discuss data sizes. Finally, they used benchmarks, a Vivado version, and an FPGA different from ours, with no information on the characteristics of the machine used for compilation. Similar observations can be made about other works from Section II. Overall, establishing a fair comparison with previous work is particularly challenging given the impossibility of reproducing identical experimental environments.

VIII Conclusion

In this paper, we presented an approach aiming at the automatic generation of Application-Specific FPGA Overlays for data flow applications, capable of providing bare metal performance. The approach extracts kernels from applications at compile time and automatically builds accelerators tailored to the application's needs. Experimental evaluations demonstrated the viability of our approach, with significant productivity improvements, power consumption reduction, and lower execution times compared to regular FPGA overlays. Future work will investigate the replicability feature of RapidWright, coupled with LLVM code instrumentation, to build more efficient FPGA accelerators.


  • [1] T. Bjerregaard and S. Mahadevan (2006) A survey of research and practices of network-on-chip. ACM Computing Surveys (CSUR) 38 (1), pp. 1. Cited by: §VI-B.
  • [2] A. Brant and G. G. Lemieux (2012) ZUMA: an open fpga overlay architecture. In Field-Programmable Custom Computing Machines (FCCM), 2012 IEEE 20th Annual International Symposium on, pp. 93–96. Cited by: §I, §II.
  • [3] A. Canis, J. Choi, M. Aldham, V. Zhang, A. Kammoona, T. Czajkowski, S. D. Brown, and J. H. Anderson (2013) LegUp: an open-source high-level synthesis tool for fpga-based processor/accelerator systems. ACM Transactions on Embedded Computing Systems (TECS) 13 (2), pp. 24. Cited by: §II.
  • [4] K. H. B. Chethan and N. Kapre (2016-08) Hoplite-dsp: harnessing the xilinx dsp48 multiplexers to efficiently support nocs on fpgas. In 2016 26th International Conference on Field Programmable Logic and Applications (FPL), Vol. , pp. 1–10. External Links: Document, ISSN 1946-1488 Cited by: §II.
  • [5] J. Cong and W. Jiang (2008) Pattern-based behavior synthesis for fpga resource reduction. In Proceedings of the 16th international ACM/SIGDA symposium on Field programmable gate arrays, pp. 107–116. Cited by: §II.
  • [6] J. Cong, B. Liu, S. Neuendorffer, J. Noguera, K. Vissers, and Z. Zhang (2011) High-level synthesis for fpgas: from prototyping to deployment. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 30 (4), pp. 473–491. Cited by: §II.
  • [7] L. P. Cordella, P. Foggia, C. Sansone, and M. Vento (2004) A (sub) graph isomorphism algorithm for matching large graphs. IEEE transactions on pattern analysis and machine intelligence 26 (10), pp. 1367–1372. Cited by: §V-B.
  • [8] T. S. Czajkowski, U. Aydonat, D. Denisenko, J. Freeman, M. Kinsner, D. Neto, J. Wong, P. Yiannacouras, and D. P. Singh (2012) From opencl to high-performance hardware on fpgas. In 22nd international conference on field programmable logic and applications (FPL), pp. 531–534. Cited by: §I.
  • [9] Dwarf home. External Links: Link Cited by: §V-B.
  • [10] R. Hartenstein (2001) Coarse grain reconfigurable architecture (embedded tutorial). In Proceedings of the 2001 Asia and South Pacific Design Automation Conference, pp. 564–570. Cited by: §I.
  • [11] Intel (2018) Intel arria 10 product table. Note: Cited by: §I.
  • [12] H. Ishebabi and C. Bobda (2009) Automated architecture synthesis for parallel programs on fpga multiprocessor systems. Microprocessors and Microsystems 33 (1), pp. 63 – 71. Note: Selected Papers from ReCoSoC 2007 (Reconfigurable Communication-centric Systems-on-Chip) External Links: ISSN 0141-9331, Document, Link Cited by: §II.
  • [13] N. Kapre and J. Gray (2015-Sep.) Hoplite: building austere overlay nocs for fpgas. In 2015 25th International Conference on Field Programmable Logic and Applications (FPL), Vol. , pp. 1–8. External Links: Document, ISSN 1946-147X Cited by: §II.
  • [14] N. S. Ketkar, L. B. Holder, and D. J. Cook (2005) Subdue: compression-based frequent pattern discovery in graph data. In Proceedings of the 1st international workshop on open source data mining: frequent pattern mining implementations, pp. 71–76. Cited by: §V-B.
  • [15] D. Koeplinger, R. Prabhakar, Y. Zhang, C. Delimitrou, C. Kozyrakis, and K. Olukotun (2016) Automatic generation of efficient accelerators for reconfigurable hardware. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 115–127. Cited by: §II.
  • [16] C. Lattner and V. Adve (2004) LLVM: a compilation framework for lifelong program analysis & transformation. In Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, pp. 75. Cited by: item 2.
  • [17] C. Lavin and A. Kaviani (2018) Rapidwright: enabling custom crafted implementations for fpgas. In 2018 IEEE 26th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), pp. 133–140. Cited by: item 3, §IV, §VI, §VI, §VII-B.
  • [18] X. Li, A. Jain, D. Maskell, and S. A. Fahmy (2016) An area-efficient fpga overlay using dsp block based time-multiplexed functional units. arXiv preprint arXiv:1606.06460. Cited by: §I.
  • [19] S. Ma, Z. Aklah, and D. Andrews (2015) A run time interpretation approach for creating custom accelerators. In Field Programmable Logic and Applications (FPL), 2015 25th International Conference on, pp. 1–4. Cited by: §II, §VII-B.
  • [20] J. Mandebi Mbongue, D. Tchuinkou Kwadjo, and C. Bobda (2018) FLexiTASK: a flexible fpga overlay for efficient multitasking. In Proceedings of the 2018 on Great Lakes Symposium on VLSI, pp. 483–486. Cited by: §I, §II.
  • [21] M. Metzner, J. A. Lizarraga, and C. Bobda (2015) Architecture virtualization for run-time hardware multithreading on field programmable gate arrays. In Applied Reconfigurable Computing, K. Sano, D. Soudris, M. Hübner, and P. C. Diniz (Eds.), Cham, pp. 167–178. Cited by: §I, §II.
  • [22] Microsoft (2018) Project catapult. Note: Cited by: §I.
  • [23] E. Mirsky, A. DeHon, et al. (1996) MATRIX: a reconfigurable computing architecture with configurable instruction distribution and deployable resources.. In FCCM, Vol. 96, pp. 17–19. Cited by: §II.
  • [24] T. Miyazaki (1997) The complexity of mckay’s canonical labeling algorithm. In Groups and Computation II, Vol. 28, pp. 239–256. Cited by: §V-B.
  • [25] T. Sims (2018) Xilinx launches the world’s fastest data center and ai accelerator cards. Note: Cited by: §I.
  • [26] F. Winterstein, S. Bayliss, and G. A. Constantinides (2013) High-level synthesis of dynamic data structures: a case study using vivado hls. In 2013 International Conference on Field-Programmable Technology (FPT), pp. 362–365. Cited by: §I, §II.
  • [27] Xilinx (2017) Vivado design suite user guide hierarchical design. Note: Cited by: §VI, §VI.
  • [28] Xilinx (2018) UltraScale+ fpgas product tables and product selection guide. Note: Cited by: §VII-B.
  • [29] Xilinx (2019) Alveo u250 data center accelerator card. Note: Cited by: §I.
  • [30] X. Yan and J. Han (2002) Gspan: graph-based substructure pattern mining. In Data Mining, 2002. ICDM 2003. Proceedings. 2002 IEEE International Conference on, pp. 721–724. Cited by: §V-B, §V-B.
  • [31] M. J. Zaki (2005) Efficiently mining frequent embedded unordered trees. Fundamenta Informaticae 66 (1-2), pp. 33–52. Cited by: §V-B.