Why ESP? ESP is an open-source research platform for heterogeneous system-on-chip (SoC) design and programming (Columbia SLD Group, 2019). ESP is the result of nine years of research and teaching at Columbia University (Carloni, 2016; Carloni et al., 2019). Our research is motivated by the observation that Information Technology has entered the age of heterogeneous computing. From embedded devices at the edge of the cloud to data center blades at its core, specialized hardware accelerators are increasingly employed to achieve energy-efficient performance (Borkar and Chen, 2011; Horowitz, 2014; Caulfield et al., 2016). Across a variety of application domains, such as mobile electronics, automotive, natural-language processing, and graph analytics, computing systems rely on highly heterogeneous SoC architectures. These architectures combine general-purpose processors with a variety of accelerators specialized for tasks like image processing, speech recognition, radio communication, and graphics (Dally et al., 2020), as well as special-purpose processor cores with custom instruction sets, graphics processing units, and tensor manipulation units (Jouppi et al., 2018). The shift of the silicon industry from homogeneous multicore processors to heterogeneous SoCs is particularly noticeable if one looks at the portion of chip area dedicated to accelerators in subsequent generations of state-of-the-art chips for smartphones (Shao et al., 2015), or at the variety of processing elements in chips for autonomous driving (Chishiro et al., 2019).
ESP Vision. ESP is a platform, i.e., the combination of an architecture and a methodology (Carloni, 2016). The methodology embodies a set of agile SoC design and integration flows, as shown in Figure 1. The ESP vision is to allow application-domain experts to design SoCs. Currently, ESP allows SoC architects to rapidly implement FPGA-based prototypes of complex SoCs. The ESP scalable architecture and its flexible methodology enable the seamless integration of third-party open-source hardware (OSH) components, e.g., the Ariane RISC-V core (Zaruba and Benini, 2019) or the NVIDIA Deep-Learning Accelerator (41). SoC architects can also instantiate accelerators that are developed with one of the many design flows and languages supported by ESP. The list, which continues to grow, currently includes: C/C++ with Xilinx Vivado HLS and Mentor Catapult HLS; SystemC with Cadence Stratus HLS; Keras TensorFlow, PyTorch, and ONNX with hls4ml; and Chisel, SystemVerilog, and VHDL for register-transfer level (RTL) design. Hence, accelerator designers can choose the abstraction level and specification language that are most suitable for their coding skills and the target computation kernels. These design flows enable the creation of a rich library of components ready to be instantiated into the ESP tile-based architecture with the help of the SoC integration flow.
Thanks to the automatic generation of device drivers from pre-designed templates, the ESP methodology simplifies the invocation of accelerators from user-level applications executing on top of Linux (Giri et al., 2020; Mantovani et al., 2016). Through the automatic generation of a network-on-chip (NoC) from a parameterized model, the ESP architecture can scale to accommodate many processors, tens of accelerators, and a distributed memory hierarchy (Giri et al., 2018). A set of platform services provides pre-validated solutions to access or manage SoC resources, including accelerator configuration (Mantovani et al., 2016), memory management (Mantovani et al., 2016), and dynamic voltage-frequency scaling (DVFS) (Mantovani et al., 2016), among others. ESP comes with a GUI that guides designers through the interactive choice and placement of the tiles in the SoC, and it has push-button capabilities for rapid prototyping of the SoC on FPGA.
Open-Source Hardware. OSH holds the promise of boosting hardware development and creating new opportunities for academia and entrepreneurship (Gupta et al., 2017). In recent years, no other project has contributed to the growth of the OSH movement more than RISC-V (Asanovic and Patterson, 2014). To date, the majority of OSH efforts have focused on the development of processor cores that implement the RISC-V ISA and small-scale SoCs that connect these cores with tightly-coupled functional units and coprocessors, typically through bus-based interconnects. Meanwhile, there have been fewer efforts in developing solutions for large-scale SoCs that combine RISC-V cores with many loosely-coupled components, such as coarse-grained accelerators (Cota et al., 2015), interconnected with a NoC. With this gap in mind, we have made an open-source release of ESP to provide the OSH community with a platform for heterogeneous SoC design and prototyping (Columbia SLD Group, 2019).
2. The ESP Architecture
The ESP architecture is structured as a heterogeneous tile grid. For a given application domain, the architect decides the structure of the SoC by determining the number and mix of tiles. For example, Figure 2 shows a 9-tile SoC organized in a matrix. There are four types of tiles: the processor tile, the accelerator tile, the memory tile for communication with main memory, and the auxiliary tile for peripherals, like UART and Ethernet, or system utilities, like the interrupt controller and the timer. To support a high degree of scalability, the ESP tiles are connected by a multiplane NoC (Yoon et al., 2013).
The content of each tile is encapsulated into a modular socket (aka shell), which interfaces the tile to the NoC and implements the platform services. The socket-based approach, which decouples the design of a tile from the design of the rest of the system, is one of the key elements of the agile ESP SoC design flow. It greatly simplifies the design effort of each tile by taking care of all the system integration aspects, and it facilitates the reuse of intellectual property (IP) blocks. For instance, the ESP accelerator socket implements services for DMA, cache coherence, performance monitors, and distributed interrupt requests.
At design time, it is possible to choose the set of services to instantiate in each tile. At runtime, the services can be enabled and many of them offer reconfigurability options, e.g., dynamic reconfiguration of the cache-coherence model (Giri et al., 2019).
The ESP architecture implements a distributed system that is inherently scalable, modular, and heterogeneous, where processors and accelerators are given the same importance in the SoC. Unlike other OSH platforms, ESP proposes a system-centric view, as opposed to a processor-centric one.
2.1. System Interconnect
Processing elements act as transaction masters that access peripherals and slave devices distributed across remote tiles. All remote communication is supported by the NoC, which acts as a transparent communication layer: processors and accelerators operate as if all remote components were connected to their local bus controller in the ESP sockets. The sockets include standard bus ports, bridges, interface adapters, and proxy components that provide a complete decoupling from the network interface. Figure 3 shows in detail the modular architecture of the ESP interconnect for the case of a six-plane NoC. Every platform service is implemented by a pair of proxy components: one translates requests from bus masters, such as processors and accelerators, into transactions for one of the NoC planes; the other forwards requests from the NoC planes to the target slave device, such as a last-level cache (LLC) partition or the Ethernet interface. For each proxy, there is a corresponding buffer queue, located between the tile port of the NoC routers and the proxy itself. In Figure 3, the color of a queue denotes the assigned NoC plane, while the number and direction of the arrows connected to the queues indicate whether packets can flow from the NoC to the tile, from the tile to the NoC, or in both directions. The arrows connect the queues to the proxies, which are labeled with the name of the services they implement and the numbers of the NoC planes used for receiving and sending packets, respectively.
The current implementation of the ESP NoC is a packet-switched 2D-mesh topology with look-ahead dimensional routing. Every hop takes a single clock cycle because arbitration and next-route computation are performed concurrently. Multiple physical planes allow protocol-deadlock prevention and provide sufficient bandwidth for the various message types. For example, since a distributed directory-based protocol for cache coherence requires three separate channels, planes 1, 2 and 3 in Figure 3 are assigned to request, forward, and response messages, respectively. Concurrent DMA transactions, issued by multiple accelerators and handled by various remote memory tiles, require separate request and response planes. Instead of reusing the cache-coherence planes, the addition of two new planes (4 and 6 in Figure 3) increases the overall NoC bandwidth. Finally, one last plane is reserved for short messages, including interrupt, I/O configuration, monitoring and debug.
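The routing decision itself is simple enough to sketch in a few lines of C (a behavioral illustration only, not the ESP router RTL; the port names and the y-axis orientation are arbitrary choices made here):

```c
#include <assert.h>

/* Output ports of a 2D-mesh router. */
enum port { PORT_NORTH, PORT_SOUTH, PORT_EAST, PORT_WEST, PORT_LOCAL };

/* Dimension-ordered (XY) routing: route along X first, then along Y.
 * This sketches the routing function only; the actual ESP routers also
 * compute the *next* hop's route in advance (look-ahead), so that
 * arbitration and route computation overlap in a single cycle. */
enum port xy_route(int x, int y, int dst_x, int dst_y)
{
    if (x < dst_x) return PORT_EAST;
    if (x > dst_x) return PORT_WEST;
    if (y < dst_y) return PORT_SOUTH;   /* y assumed to grow downward */
    if (y > dst_y) return PORT_NORTH;
    return PORT_LOCAL;                  /* packet has arrived */
}
```

Because the X coordinate is always resolved before the Y coordinate, two packets can never form a routing cycle, which is what makes dimension-ordered routing deadlock-free within a plane.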
Currently, customizing the NoC topology is not automated in the ESP SoC integration flow. System architects, however, may explore different topologies by modifying the router instances and updating the logic to generate the header flit for the NoC packets (Yoon et al., 2017).
2.2. Processor Tile
Each processor tile contains a processor core that is chosen at design time among those available: the current choice is between the RISC-V 64-bit Ariane core from ETH Zurich (Zaruba and Benini, 2019) and the SPARC 32-bit LEON3 core from Cobham Gaisler (Cobham Gaisler). Both cores are capable of running Linux and they come with their private L1 caches. The processor integration into the distributed ESP system is transparent: no ESP-specific software patches are needed to boot Linux. Each processor communicates on a local bus and is agnostic of the rest of the system. The memory interface of the LEON3 core requires a 32-bit AHB bus, whereas Ariane comes with a 64-bit AXI interface. In addition to proxies and bus adapters, the processor socket provides a unified private L2 cache of configurable size, which implements a directory-based MESI cache-coherence protocol. Processor requests directed to memory-mapped I/O registers are forwarded by the socket to the IO/IRQ NoC plane through an APB adapter. The only processor-specific component of the socket is an interrupt-level proxy, which implements the custom communication protocol between the processor and the interrupt controller and system timer in the auxiliary tile.
2.3. Memory Tile
Each memory tile contains a channel to external DRAM. The number of memory tiles can be configured at design time; typically, it varies from one to four depending on the size of the SoC. All the hardware logic needed to support the partitioning of the addressable memory space is generated automatically, and the partitioning is completely transparent to software. Each memory tile also contains a partition of the LLC, of configurable size, with the corresponding directory. The LLC in ESP implements an extended MESI protocol that, in combination with the private L2 caches in the processor tiles, supports Linux with symmetric multiprocessing, as well as runtime-reconfigurable coherence for accelerators (Giri et al., 2019).
2.4. Accelerator Tile
This tile contains the specialized hardware of a loosely-coupled accelerator (Cota et al., 2015). This type of accelerator executes a coarse-grained task independently from the processors while exchanging large datasets with the memory hierarchy. To be integrated in the ESP tile, illustrated on the top-left portion of Figure 3, an accelerator should comply with a simple interface that includes load/store ports for latency-insensitive channels (Carloni et al., 1999; Carloni, 2015), signals to configure and start the accelerator, and an acc_done signal that notifies completion and generates an interrupt for the processors. ESP accelerators that are newly designed with one of the supported design flows comply with this interface automatically. For existing accelerators, ESP offers a third-party integration flow (Giri et al., 2020); in this case, the accelerator tile has only a subset of the proxy components, because the configuration registers, the DMA engine for memory access, and the TLB for virtual memory (Mantovani et al., 2016) are replaced by standard bus adapters.
The set of platform services provided by the socket relieves the designer from the burden of “reinventing the wheel” when implementing accelerator configuration through memory-mapped registers, address translation, and coherence protocols. Furthermore, the socket enables point-to-point (P2P) communication among accelerator tiles so that they can exchange data directly instead of necessarily communicating through shared memory.
Third-party accelerators can use the services to issue interrupt requests, receive configuration parameters and initiate DMA transactions. They are responsible, however, for managing shared resources, such as reserved memory regions, and for implementing their own custom hardware-software synchronization protocol.
The run-time reconfigurable coherence-protocol service is particularly relevant for accelerators. In fact, no single static coherence protocol can serve well all invocations of a set of heterogeneous accelerators in a given SoC (Giri et al., 2018). With the non-coherent DMA model, an accelerator bypasses the cache hierarchy to exchange data directly with main memory. With the fully-coherent model, the accelerator communicates with an optional private cache placed in the accelerator socket. The ESP cache hierarchy augments a directory-based MESI protocol with support for two models in which accelerators send requests directly to the LLC, without owning a private cache: the LLC-coherent DMA and the coherent DMA models. The latter keeps the accelerator requests coherent with respect to all private caches in the system, whereas the former does not. While fully-coherent and coherent DMA are handled entirely in hardware by the ESP cache hierarchy, non-coherent DMA and LLC-coherent DMA require that software acquire appropriate locks and flush private caches before invoking accelerators. These synchronization mechanisms are implemented by the ESP device drivers, which are generated automatically when selecting any of the supported HLS flows discussed in Section 4.
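The distinction among the four models can be condensed into a short sketch. The enum values and helper function below are hypothetical (they are not part of the real ESP driver API); they only encode which models, per the description above, leave cache flushing to software:

```c
#include <assert.h>
#include <stdbool.h>

/* The four accelerator coherence models supported by ESP. */
enum coherence_model {
    ESP_NON_COHERENT_DMA,   /* bypass caches, access DRAM directly      */
    ESP_LLC_COHERENT_DMA,   /* requests go to the LLC, skip L1/L2       */
    ESP_COHERENT_DMA,       /* LLC requests kept coherent with L1/L2    */
    ESP_FULLY_COHERENT      /* private cache in the accelerator socket  */
};

/* Hypothetical helper (not the real ESP driver API): must the software
 * stack flush the processors' private caches before invoking an
 * accelerator under this model? Only the fully-coherent and
 * coherent-DMA models are handled entirely in hardware. */
bool needs_software_flush(enum coherence_model m)
{
    return m == ESP_NON_COHERENT_DMA || m == ESP_LLC_COHERENT_DMA;
}
```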
2.5. Auxiliary Tile
The auxiliary tile hosts all shared peripherals in the system except memory: the Ethernet NIC, UART, a digital video interface, a debug link to control ESP prototypes on FPGA, and a monitor module that collects various performance counters and periodically forwards them through the Ethernet interface.
As shown in Figure 3, the socket of the auxiliary tile is the most complex because most platform services must be available to serve the devices hosted by this tile. The interrupt-level proxy, for instance, manages the communication between the processors and the interrupt controller. Ethernet, which requires coherent DMA to operate as a slave peripheral, enables users to remotely log into an ESP instance via SSH. The frame-buffer memory, dedicated to the video output, is connected to one proxy for memory-mapped I/O and one for non-coherent DMA transactions. These enable both processor cores and accelerators to write directly into the video frame buffer. The Ethernet debug interface (Gaisler, 2004), instead, uses the memory-mapped I/O and register access services to allow ESP users to monitor and debug the system through the ESP Link application. Symmetrically, UART, timer, interrupt controller and the bootrom are controlled by any master in the system through the counterpart proxies for memory-mapped I/O and register access. Hence, the auxiliary tile includes both pairs of proxies. These enable an additional communication path, labeled as local port shortcut in Figure 3, which connects the masters in the auxiliary tile (i.e. the Ethernet debug link) with slaves that do not share the same local bus. A similar shortcut in the processor tile allows a processor to flush its own private L2 cache and manage its local DVFS controller.
3. The ESP Software Stack
The ESP accelerator Application Programming Interface (API) library simplifies the invocation of accelerators from a user application by exposing only three functions to the programmer (Giri et al., 2020). Underneath, the API invokes the accelerators through the automatically generated Linux device drivers. The API is lightweight and can be targeted from existing applications or by a compiler. For a given application, the software execution of a computationally intensive kernel can be replaced with a hardware accelerator by means of a single function call (esp_run()). Figure 4 shows the case of an application with four computation kernels, two executed in software and two implemented with accelerators. The configuration argument passed to esp_run() is a simple data structure that specifies which accelerator(s) to invoke, how to configure them, and their point-to-point dependencies, if any. By using the esp_alloc() and esp_free() functions for memory allocation, data can be truly shared between accelerators and processors, i.e., no data copies are necessary. Data are allocated in a way that improves the accelerators’ access to memory without compromising the software’s performance (Mantovani et al., 2016). The ESP software stack, combined with the automatic generation of device drivers for new custom accelerators, makes the accelerator invocation as transparent as possible for application programmers.
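As a concrete illustration of this three-function pattern, consider the following minimal C sketch. The names esp_alloc(), esp_run(), and esp_free() come from the text, but their exact signatures and the fields of the configuration structure are assumptions here, and the bodies are stubs (esp_run() executes a placeholder software kernel) so that the sketch is self-contained:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the ESP accelerator API: the structure
 * fields and signatures below are illustrative assumptions, not the
 * real library definitions. */
struct esp_cfg {
    const char *devname;   /* which accelerator to invoke           */
    int *buf;              /* buffer shared with the accelerator,
                              no data copies needed                 */
    size_t len;            /* number of data tokens                 */
};

/* Stubs standing in for the ESP contiguous-memory allocator. */
void *esp_alloc(size_t size) { return malloc(size); }
void  esp_free(void *buf)    { free(buf); }

/* Stub: in a real system this call configures the accelerator through
 * its device driver and blocks until the acc_done interrupt arrives.
 * Here it runs a placeholder software kernel (doubling each token). */
void esp_run(struct esp_cfg *cfg)
{
    for (size_t i = 0; i < cfg->len; i++)
        cfg->buf[i] *= 2;
}
```

A caller would then allocate the shared buffer with esp_alloc(), fill it, invoke esp_run() with the configuration structure, and release the buffer with esp_free(), with no explicit data copies between processor and accelerator.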
4. The ESP Methodology
The ESP design methodology is flexible because it embodies different design flows, for the most part automated and supported by commercial CAD tools. In particular, recalling Figure 1, the accelerator design flow (on the left in the figure) aids the creation of an IP library, whereas the SoC flow (on the right) automates the integration of heterogeneous components into a complete SoC.
4.1. Accelerator Flow
The end goal of this flow is to add new elements to the library of accelerators that can be automatically instantiated with the SoC flow. Designers can work at different abstraction levels with various specification languages:
Cycle-accurate RTL descriptions with languages like VHDL, Verilog, SystemVerilog, or Chisel.
Loosely-timed or un-timed behavioral descriptions with SystemC or C/C++ that get synthesized into RTL with high-level synthesis (HLS) tools. ESP currently supports the three main commercial HLS tools: Cadence Stratus HLS, Mentor Catapult, and Xilinx Vivado HLS.
Domain-specific libraries for deep learning like Keras TensorFlow, PyTorch, and ONNX, for which ESP offers a flow combining HLS tools with hls4ml, an OSH project (Duarte et al., 2018).
HLS-Based Accelerator Design. For the HLS-based flows, ESP facilitates the job of the accelerator designer by providing ESP-compatible accelerator templates, HLS-ready skeleton specifications, multiple examples, and step-by-step tutorials for each flow.
The push in the adoption of HLS from C/C++ specifications has many reasons: (1) the large codebase of algorithms written in these languages, (2) simplified hardware/software co-design (since most embedded software is written in C), and (3) functional execution of the specification that is a thousand-fold faster than the counterpart RTL simulation. On the other hand, HLS from C/C++ has also shown limitations due to the inability to specify or accurately infer concurrency, timing, and communication properties of hardware systems. HLS flows based on the IEEE-standard language SystemC overcome these limitations, making SystemC the de facto standard to model protocols and control-oriented applications at a level higher than RTL.
In ESP, we support and encourage the use of both C/C++ and SystemC flows for HLS and we have defined a set of guidelines to support the designers in porting an application to an HLS-ready format. The ideal starting point is a self-contained description of the computation kernel, written in a subset of the C/C++ language (Nane et al., 2016): a limited use of pointers and the absence of dynamic memory allocation and recursion are important; also, aside from common mathematical functions, no external library functions should be used. This initial software transformation is oftentimes the most important step to obtain good quality source code for HLS (Mantovani et al., 2016).
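As an illustration of these guidelines, the following kernel (an invented example, not taken from the ESP release) is written in an HLS-ready subset of C: fixed-size arrays instead of dynamic allocation, bounded loops, no recursion, and no external library calls:

```c
#define LEN 64   /* compile-time size, standing in for the PLM capacity */

/* HLS-ready formulation of a simple windowed-sum kernel: pointers are
 * used only as array references, all loop bounds are known or bounded
 * (win <= LEN), and there is no heap allocation or recursion, so an
 * HLS tool can map the arrays to on-chip memories and unroll or
 * pipeline the loops as directed. */
void window_sum(const int in[LEN], int out[LEN], int win)
{
    for (int i = 0; i < LEN; i++) {
        int acc = 0;
        for (int j = 0; j < win; j++)
            acc += in[(i + j) % LEN];   /* circular window over the input */
        out[i] = acc;
    }
}
```

The same function remains plain, executable C, so the designer can validate it a thousand-fold faster in software before synthesizing it to RTL.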
The designer of an ESP accelerator should aim at a well-structured description that partitions the specification into concurrent functional blocks. The goal is to obtain a synthesizable specification that enables the exploration of a vast design space, by evaluating many micro-architectural and optimization choices. Figure 5 shows the relationship between the C/C++/SystemC design space and the RTL design space. The HLS tools provide a rich set of configuration knobs to obtain a variety of RTL implementations, each corresponding to a different cost-performance tradeoff point (Liu et al., 2012; Liu and Carloni, 2013). The knobs are push-button directives of the HLS tool, represented by the green arrows. Designers may also perform manual transformations of the specification (orange arrows) to explore the design space while preserving the functional behavior. For example, they can expose parallelism by removing false dependencies, or they can reduce resource utilization by encapsulating sections of code with similar behavior into callable functions (Mantovani et al., 2016).
Accelerator Structure. The ESP accelerators are based on the loosely-coupled model (Cota et al., 2015). They are programmed like devices by applications that invoke device drivers with standard system calls, such as open and ioctl. They perform coarse-grained computations while exchanging large data sets with the memory hierarchy. Figure 6 shows the structure and interface common to all ESP accelerators. The interface channels allow the accelerator to (1) communicate with the CPU via memory-mapped registers (conf_info), (2) program the DMA controller or interact with other accelerators (load_ctrl and store_ctrl), (3) exchange data with the memory hierarchy or other accelerators, (load_chnl and store_chnl), and (4) notify its completion back to the software application (acc_done).
These channels are implemented with latency-insensitive communication primitives, which HLS tools commonly provide as libraries (e.g., Mentor MatchLib Connections (Khailany et al., 2018), Cadence Flex Channels (M. Meredith, 2008), Xilinx ap_fifo). These primitives preserve functional correctness in the presence of latency variation both in the computation within the accelerator and in the communication across the NoC (Carloni, 2015). This is obtained by adding valid and ready signals to the channels: the valid signal indicates that the value of the data bundle is valid in the current clock cycle, while the ready signal is de-asserted to apply backpressure. The latency-insensitive nature of ESP accelerators allows designers to fully exploit the ability of HLS to produce many alternative RTL implementations, which are not strictly equivalent from an RTL viewpoint (i.e., they do not produce the same timed sequence of outputs for any valid input sequence), but are members of a latency-equivalent design class (Carloni et al., 2001). Each member of this class can be seamlessly replaced with another one, depending on performance and cost targets (Piccolboni et al., 2017).
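The handshake can be illustrated with a small cycle-by-cycle model in C (a sketch for intuition only, not ESP RTL or a vendor library): a datum is transferred only in a cycle where valid and ready are both asserted, so stalls on either side delay, but never corrupt, the transmitted sequence.

```c
#include <stdbool.h>
#include <stddef.h>

#define CYCLES 16

/* Behavioral model of a valid/ready (latency-insensitive) channel.
 * Each cycle, the producer asserts valid when it holds a datum, the
 * consumer asserts ready when it can accept one, and a transfer
 * happens only when both are high. Returns the number of data
 * received, written in order into received[]. */
size_t simulate(const bool producer_stall[CYCLES],
                const bool consumer_ready[CYCLES],
                int received[CYCLES])
{
    int  next = 0;          /* next value the producer will generate  */
    int  data = 0;          /* datum currently held by the producer   */
    bool valid = false;     /* producer holds a datum in flight       */
    size_t got = 0;

    for (int cycle = 0; cycle < CYCLES; cycle++) {
        /* produce a new datum unless stalled or one is still pending */
        if (!valid && !producer_stall[cycle]) {
            data = next++;
            valid = true;
        }
        /* handshake: transfer only when valid and ready are both high;
         * if ready is low, the producer keeps valid high and holds data */
        if (valid && consumer_ready[cycle]) {
            received[got++] = data;
            valid = false;
        }
    }
    return got;
}
```

Regardless of the stall and backpressure patterns, the consumer always observes the sequence 0, 1, 2, … in order, which is precisely the functional correctness that latency-insensitive design guarantees.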
The execution flow of an ESP accelerator consists of four phases, configure, load, compute, and store, as shown in Figure 6. A software application configures, checks, and starts the accelerator via memory-mapped registers. During the load and store phases, the accelerator interacts with the DMA controller, interleaving data exchanges between the system and the accelerator’s private local memory (PLM) with computation. When the accelerator completes its task, an interrupt resumes the software for further processing.
For better performance, the accelerator can have one or more parallel computation components that interact with the PLM. The organization of the PLM itself is typically customized for the given accelerator, with multiple banks and ports. For example, the designer can organize it as a circular buffer or as ping-pong buffers to support the pipelining of computation and of the data transfers with the external memory or other accelerators. Designers can leverage PLM generators (Pilato et al., 2017) to implement many different memory subsystems, each optimized for a specific combination of HLS knob settings.
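The ping-pong scheme can be sketched functionally in C (an illustrative model, not ESP code): two PLM banks alternate roles so that, in hardware, the load of burst i+1 can overlap with the computation on burst i. In this sequential model the overlap is only notional, but the bank alternation is the same.

```c
#define BURSTS    6
#define BURST_LEN 4

/* Functional model of a ping-pong PLM: while the compute phase reads
 * one bank, the load phase fills the other with the next DMA burst,
 * and the two banks swap roles every round. The "compute" here is a
 * placeholder reduction (sum of each burst). */
void run_pingpong(int input[BURSTS][BURST_LEN], int output[BURSTS])
{
    int plm[2][BURST_LEN];            /* the two PLM banks            */
    int fill = 0;                     /* bank currently being filled  */

    /* prologue: load the first burst before compute can start */
    for (int k = 0; k < BURST_LEN; k++)
        plm[fill][k] = input[0][k];

    for (int i = 0; i < BURSTS; i++) {
        int work = fill;              /* compute reads the bank just filled */
        fill = 1 - fill;              /* load switches to the other bank... */
        if (i + 1 < BURSTS)           /* ...and fetches the next burst      */
            for (int k = 0; k < BURST_LEN; k++)
                plm[fill][k] = input[i + 1][k];

        int acc = 0;                  /* compute phase: reduce the burst */
        for (int k = 0; k < BURST_LEN; k++)
            acc += plm[work][k];
        output[i] = acc;
    }
}
```

In an actual accelerator, the load loop and the compute loop would be concurrent processes synchronized through latency-insensitive channels, so each iteration of the outer loop hides one DMA burst behind one computation.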
Accelerator Behavior. The charts of Figure 7 show the behavior of two concurrent ESP accelerators (ACC0 and ACC1) in three possible scenarios. The two accelerators work in a producer-consumer fashion: ACC0 generates data that ACC1 takes as input. The accelerators are executed twice and concurrently; the consumer starts as soon as its input data are ready; both accelerators perform bursts of load and store DMA transactions, shown in red and brown, respectively. The completion of the configuration phase and the interrupt request (acc_done) are marked with CFG and IRQ, respectively.
In the top scenario, the two accelerators communicate via external memory. First, the producer ACC0 runs and stores the resulting data in main memory. Upon completion of the producer, the consumer ACC1 starts and accesses the data in main memory; concurrently, the producer ACC0 can run a second time. The data exchange happens through memory at the granularity of the whole accelerator data set; this scenario is thus a virtual pipeline of ESP accelerators through memory. Ping-pong buffering on the PLM for both the load and store phases allows the overlap of computation and communication. In addition, the load and store phases are allowed to overlap, which is only possible if each accelerator has a dedicated memory channel (e.g., two ESP memory tiles). As long as the NoC and memory bandwidth are not saturated, the performance overhead is limited to the driver run time and the interrupt-handling procedures. We consider this scenario ideal.
In complex SoCs, it is reasonable to expect resource contention and delays with the main memory. This can limit the latency and throughput of the accelerators, as shown in the middle scenario of Figure 7, where some of the DMA transactions get delayed for both the producer and consumer accelerators. The ESP library and API allow designers to replace the described software pipeline with an actual pipeline of accelerators, based on point-to-point (P2P) communication over the NoC. The communication method does not need to be chosen at design time; instead, special configuration registers are used to override the default DMA behavior. Besides relieving memory contention, P2P communication can actually improve the latency and throughput of communicating accelerators, as shown in the bottom scenario of Figure 7. Here, each output transaction of the producer ACC0 is matched to an input transaction of the consumer ACC1 (in green). Differently from the previous scenarios, the data exchange via P2P happens at a finer granularity: a single store transaction of the producer ACC0 is a valid input for the consumer ACC1. Designers should take this assumption into account when designing accelerators for a specific task.
Accelerator Templates and Code Generator. ESP provides the designers with a set of accelerator templates for each of the HLS-based design flows. These templates leverage concepts of object-oriented programming and class inheritance to simplify the design of the accelerators in C/C++ or SystemC and to enforce the interface and structure previously described. They also implicitly address the differences among the various HLS tools and input specification languages. For example, the latency-insensitive primitives provided by the different vendors may have slightly different APIs, e.g., Put()/Get() vs. Read()/Write(), or different timing behavior. With some HLS tools, the designer has to insert extra wait() statements in SystemC to generate correct RTL code. In the case of C/C++ designs, a combination of HLS directives and coding style must be followed to ensure that extra memories are not inadvertently inferred and that the phases are correctly synchronized.
Next to the templates, ESP provides a further aid for accelerator design: an interactive environment that generates a fully-working and HLS-ready accelerator skeleton from a set of parameters passed by the designer. The skeleton comes with a unit testbench, synthesis and simulation scripts, a bare-metal driver, a Linux driver, and a sample test application. This is the first step of the accelerator design flow, as shown on the top-right of Figure 8. The skeleton is a basic specification that uses the templates and contains placeholders for manual accelerator-specific customizations. The parameters passed by the designers include: a unique name and ID, the desired HLS tool flow, a list of application-specific configuration registers, the bit-width of the data tokens, the size of the data set, and the number of batches of data sets to be executed without interrupting the CPU. Next to this application-specific information, designers can choose architectural parameters that set the minimum required size of the PLM and the maximum memory footprint of the application that invokes the accelerator. These parameters affect the generated accelerator skeleton, device driver, and test application, as well as the configuration parameters for the ESP socket that will host the accelerator.
Starting from the automatically generated skeleton, designers must customize the accelerator computation phase, leveraging the software implementation of the target computation kernel as a reference. In addition, they are responsible for customizing the input-generation and output-validation functions in the unit testbench and in the bare-metal and Linux test applications. Finally, in the case of complex data access patterns, they may also need to extend the communication part of the accelerator and define a more complex structure for the PLM. The ESP release offers a set of online tutorials that describe these steps in detail with simple examples, which demonstrate how the first version of a new accelerator can be designed, integrated, and tested on FPGA in a few hours (Columbia SLD Group, 2019).
The domain-specific flow for embedded machine learning is fully automated (Giri et al., 2020): the accelerator and the related software drivers and applications are generated in their entirety from the neural-network model. ESP also automatically generates the accelerator tile socket and a wrapper for the accelerator logic.
4.2. Third-Party Accelerator Integration
For existing accelerators, ESP provides a third-party accelerator integration flow (TPF). The TPF skips all the steps necessary to design a new accelerator and goes directly to SoC integration. The designer must provide some information about the existing IP block and a simple wrapper to connect the wires of the accelerator’s interface to the ESP socket. Specifically, the designer must fill in a short XML file with a unique accelerator name and ID, the list and polarity of the reset signals, the list of clock signals, an optional prefix for the AXI master interface in the wrapper, the user-defined width of AXI optional control signals and the type of interrupt request (i.e., level or edge sensitive). In addition, the TPF requires the list of RTL source files, including Verilog, SystemVerilog, VHDL and VHDL packages, a custom Makefile to compile the third-party software and device drivers, and the list of executable files, libraries and other binary objects needed to control the accelerator.
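A hypothetical sketch of such an XML descriptor is shown below; the tag and attribute names are illustrative only and do not reproduce the exact schema of the ESP release:

```xml
<!-- Illustrative sketch only: element and attribute names are
     hypothetical, not the actual ESP TPF schema. -->
<accelerator name="my_tp_acc" id="0x42">
  <resets>
    <reset name="rstn" polarity="active_low"/>
  </resets>
  <clocks>
    <clock name="clk"/>
  </clocks>
  <axi master_prefix="m_" user_width="4"/>
  <interrupt type="level"/>
</accelerator>
```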
Currently, ESP provides adapters for AXI master (32 and 64 bits), AHB master (32 bits) and AXI-Lite or APB slave (32 bits). As long as the target accelerator is compliant with these standard bus protocols, the Verilog top-level wrapper consists of simple wire assignments to expose the bus ports to the ESP socket and to connect any non-standard input port of the third-party accelerator (e.g., a test-mode disable), if present. After these simple manual steps, ESP takes care of the whole integration automatically. We used the TPF to integrate the NVDLA (41). An online tutorial in the ESP release demonstrates the design of a complete SoC with multiple NVDLA tiles, multiple memory tiles and the Ariane RISC-V processor. This system can run up to four concurrent machine-learning tasks using the original NVDLA software stack (Giri et al., 2020); a minor patch was required to run multiple NVDLAs in a Linux environment.
4.3. SoC Flow
The center and the left portion of Figure 8 illustrate the agile SoC development enabled by ESP. Both the ESP and third-party accelerator flows contribute to the pool of IP components that can be selected to build an SoC instance. The ESP GUI guides the designers through an interactive SoC design flow that allows them to: choose the number, types and positions of tiles, select the desired Pareto-optimal design point from the HLS flows for each accelerator, select the desired processor core among those available, determine the cache hierarchy configuration, select the clock domains for each tile, and enable the desired system monitors. The GUI writes a configuration file that the ESP build flow can include to generate RTL sockets, the system memory mapping, NoC routing tables, the device tree for the target processor architecture, software header files, and configuration parameters for the proxy components.
A single make target is sufficient to generate the bitstream for one of the supported Xilinx evaluation boards (VCU128, VCU118 and VC707) and proFPGA prototyping modules (XCVU440 and XC7V2000T). Another single make target compiles Linux and creates a default root file system that includes the accelerators' drivers and test applications, together with all necessary initialization scripts to load the ESP library and memory allocator. If properly listed during the TPF, the software stack for the third-party accelerators is loaded into the Linux image as well. When the FPGA implementation is ready, users can load the boot loader onto the ESP boot memory and the Linux image onto the external DRAM with the ESP Link application and its companion module on the auxiliary tile. Next, ESP Link sends a soft reset to the processor cores, thus starting execution from the boot loader. Users can monitor the boot process via UART, or log in with SSH after the Linux boot completes. The online tutorials explain how to properly wire the FPGA boards to a simple home router to ensure connectivity.
In addition to FPGA prototyping, designers can run full-system RTL simulations of a bare-metal program. They can also run bare-metal applications on FPGA, monitoring execution through the UART serial interface. The development of bare-metal and Linux applications for an ESP SoC is facilitated by the ESP software stack described in Section 3, and the ESP release offers several examples.
The agile ESP flow allowed us to rapidly prototype many complex SoCs on FPGA, including:
A multi-core SoC booting Linux SMP with tens of accelerators, multiple DRAM controllers, and dynamically reconfigurable cache coherence models (Giri et al., 2019).
A RISC-V based SoC where deep-learning applications running on top of Linux invoke loosely-coupled accelerators designed with the various ESP accelerator design flows (Giri et al., 2020).
A RISC-V based SoC with multiple instances of the NVDLA controlled by the RISC-V Ariane processor (Giri et al., 2020).
5. Related Work
The OSH movement is supported by multiple SoC design platforms, many based on the RISC-V open-standard ISA (Asanovic and Patterson, 2014; Greengard, 2020). The Rocket Chip Generator is an OSH project that leverages the Chisel RTL language to construct SoCs with multiple RISC-V cores connected through a coherent TileLink bus (Lee et al., 2016). The Chipyard framework inherits Rocket Chip’s Chisel-based parameterized hardware generator methodology and also allows the integration of IP blocks written in other RTL languages, via a Chisel wrapper, as well as domain-specific accelerators (Amid et al., 2020). Celerity used the custom co-processor interface RoCC
of the Rocket chip to integrate five Rocket cores with an array of 496 simpler RISC-V cores and a binarized neural network (BNN) accelerator, which was designed with HLS, into a 385-million-transistor SoC (Davidson et al., 2018). HERO is an FPGA-based research platform that allows the integration of a standard host multicore processor with programmable manycore accelerators composed of clusters of RISC-V cores based on the PULP platform (Kurth et al., 2017; Rossi et al., 2014). OpenPiton was the first open-source SMP Linux-booting RISC-V multicore processor (Balkind et al., 2019). It supports research on heterogeneous ISAs and provides a coherence protocol that extends across multiple chips (Balkind et al., 2020; Princeton Parallel Group). BlackParrot is a multicore RISC-V architecture that offers some support for the integration of loosely-coupled accelerators (Petrisko et al., 2020); currently, it provides two of the four cache-coherence options supported by ESP: fully-coherent and non-coherent.
While most of these platforms are built with a processor-centric perspective, ESP promotes a system-centric perspective with a scalable NoC-based architecture and a strong focus on the integration of heterogeneous components, particularly loosely-coupled accelerators. Another feature distinguishing ESP from the other open-source SoC platforms is its flexible system-level design methodology, which embraces a variety of specification languages and synthesis flows, while promoting the use of HLS to facilitate the design and integration of accelerators.
In summary, with ESP we aim to contribute to the open-source movement by supporting the realization of more scalable architectures for SoCs that integrate more heterogeneous components, thanks to a more flexible design methodology that accommodates different specification languages and design flows. Conceived as a heterogeneous integration platform and tested through years of teaching at Columbia University, ESP is naturally suited to foster collaborative engineering of SoCs across the OSH community.
Acknowledgements. Over the years, the ESP project has been supported in part by DARPA (C#: HR001113C0003 and HR001118C0122), the ARO (G#: W911NF-19-1-0476), the NSF (A#: 1219001), and C-FAR (C#: 2013-MA-2384), an SRC STARnet center. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, the Department of Defense or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
- Chipyard: Integrated Design, Simulation, and Implementation Framework for Custom SoCs. IEEE Micro 40 (4), pp. 10–21.
- Ariane. https://github.com/pulp-platform/ariane
- The Case for Open Instruction Sets. Microprocessor Report.
- OpenPiton at 5: A Nexus for Open and Agile Hardware Design. IEEE Micro 40 (4), pp. 22–31.
- OpenPiton+Ariane: The First Open-Source, SMP Linux-booting RISC-V System Scaling from One to Many Cores. In Workshop on Computer Architecture Research with RISC-V (CARRV), pp. 1–6.
- The Future of Microprocessors. Communications of the ACM 54, pp. 67–77.
- Teaching Heterogeneous Computing with System-Level Design Methods. In Workshop on Computer Architecture Education (WCAE), pp. 1–8.
- A Methodology for "Correct-by-Construction" Latency-Insensitive Design. In Proc. of the International Conference on Computer-Aided Design (ICCAD), pp. 309–315.
- Theory of Latency-Insensitive Design. IEEE Transactions on CAD of Integrated Circuits and Systems 20 (9), pp. 1059–1076.
- From Latency-Insensitive Design to Communication-Based System-Level Design. Proceedings of the IEEE 103 (11), pp. 2133–2151.
- The Case for Embedded Scalable Platforms. In Proc. of the Design Automation Conference (DAC), pp. 17:1–17:6.
- A Cloud-Scale Acceleration Architecture. In Proc. of the IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 1–13.
- Towards Heterogeneous Computing Platforms for Autonomous Driving. In Proc. of the International Conference on Embedded Software and Systems (ICESS).
- LEON3. www.gaisler.com/index.php/products/processors/leon3
- ESP Release. www.esp.cs.columbia.edu
- An Analysis of Accelerator Coupling in Heterogeneous Architectures. In Proc. of the Design Automation Conference (DAC), pp. 202:1–202:6.
- Domain-Specific Hardware Accelerators. Communications of the ACM 63 (7), pp. 48–57.
- The Celerity Open-Source 511-Core RISC-V Tiered Accelerator Fabric: Fast Architectures and Design Methodologies for Fast Chips. IEEE Micro 38 (2), pp. 30–41.
- Fast Inference of Deep Neural Networks in FPGAs for Particle Physics. Journal of Instrumentation 13 (07), P07027.
- An Open-Source VHDL IP Library with Plug & Play Configuration. Building the Information Society.
- Ariane + NVDLA: Seamless Third-Party IP Integration with ESP. In Workshop on Computer Architecture Research with RISC-V (CARRV).
- ESP4ML: Platform-Based Design of Systems-on-Chip for Embedded Machine Learning. In Proc. of the Conference on Design, Automation, and Test in Europe (DATE), pp. 1049–1054.
- NoC-Based Support of Heterogeneous Cache-Coherence Models for Accelerators. In Proc. of the International Symposium on Networks-on-Chip (NOCS), pp. 1:1–1:8.
- Accelerators & Coherence: An SoC Perspective. IEEE Micro 38 (6), pp. 36–45.
- Runtime Reconfigurable Memory Hierarchy in Embedded Scalable Platforms. In Proc. of the Asia and South Pacific Design Automation Conference (ASPDAC), pp. 719–726.
- Will RISC-V Revolutionize Computing? Communications of the ACM 63 (5), pp. 30–32.
- Kickstarting Semiconductor Innovation with Open Source Hardware. IEEE Computer 50 (6), pp. 50–59.
- HLS4ML. https://fastmachinelearning.org/hls4ml/
- Computing's Energy Problem (and What We Can Do About It). In International Solid-State Circuits Conference (ISSCC), pp. 10–14.
- A Domain-Specific Architecture for Deep Neural Networks. Communications of the ACM 61 (9), pp. 50–59.
- A Modular Digital VLSI Flow for High-Productivity SoC Design. In Proc. of the Design Automation Conference (DAC), pp. 1–6.
- HERO: Heterogeneous Embedded Research Platform for Exploring RISC-V Manycore Accelerators on FPGA. In Workshop on Computer Architecture Research with RISC-V (CARRV), pp. 1–7.
- An Agile Approach to Building RISC-V Microprocessors. IEEE Micro 36 (2), pp. 8–20.
- On Learning-Based Methods for Design-Space Exploration with High-Level Synthesis. In Proc. of the Design Automation Conference (DAC), pp. 1–7.
- Compositional System-Level Design Exploration with Planning of High-Level Synthesis. In Proc. of the Conference on Design, Automation, and Test in Europe (DATE), pp. 641–646.
- High-Level SystemC Synthesis with Forte's Cynthesizer. In High-Level Synthesis, pp. 75–97.
- Handling Large Data Sets for High-Performance Embedded Applications in Heterogeneous Systems-on-Chip. In Proc. of the International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), pp. 1–10.
- An FPGA-Based Infrastructure for Fine-Grained DVFS Analysis in High-Performance Embedded Systems. In Proc. of the Design Automation Conference (DAC), pp. 157:1–157:6.
- High-Level Synthesis of Accelerators in Embedded Scalable Platforms. In Proc. of the Asia and South Pacific Design Automation Conference (ASPDAC), pp. 204–211.
- A Survey and Evaluation of FPGA High-Level Synthesis Tools. IEEE Transactions on CAD of Integrated Circuits and Systems 35 (10), pp. 1591–1604.
- NVIDIA Deep Learning Accelerator (NVDLA). www.nvdla.org
- BlackParrot: An Agile Open-Source RISC-V Multicore for Accelerator SoCs. IEEE Micro 40 (4), pp. 93–102.
- COSMOS: Coordination of High-Level Synthesis and Memory Optimization for Hardware Accelerators. ACM Transactions on Embedded Computing Systems 16 (5s), pp. 150:1–150:22.
- System-Level Optimization of Accelerator Local Memory for Heterogeneous Systems-on-Chip. IEEE Transactions on CAD of Integrated Circuits and Systems 36 (3), pp. 435–448.
- OpenPiton. https://parallel.princeton.edu/openpiton/
- PULP. https://pulp-platform.org/
- Energy Efficient Parallel Computing on the PULP Platform with Support for OpenMP. In Convention of Electrical and Electronics Engineers in Israel (IEEEI).
- The Aladdin Approach to Accelerator Design and Modeling. IEEE Micro 35 (3), pp. 58–70.
- Virtual Channels and Multiple Physical Networks: Two Alternatives to Improve NoC Performance. IEEE Transactions on CAD of Integrated Circuits and Systems 32 (12), pp. 1906–1919.
- System-Level Design of Networks-on-Chip for Heterogeneous Systems-on-Chip. In Proc. of the International Symposium on Networks-on-Chip (NOCS), pp. 1–6.
- The Cost of Application-Class Processing: Energy and Performance Analysis of a Linux-Ready 1.7-GHz 64-Bit RISC-V Core in 22-nm FDSOI Technology. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 27 (11), pp. 2629–2640.