The Instruction Set Architecture (ISA) has long been an essential abstraction in the computing world. Describing complex hardware behavior using a manageable set of instructions has largely decoupled computer architects from compiler engineers, allowing the fields to make significant progress independently. During the reign of Moore’s Law, this decoupled advancement allowed software to benefit from regular improvements in hardware. But now, as Moore’s Law fails to hold, researchers have been increasingly searching for innovations which cross the ISA boundary and challenge the assumption of a rigid software–hardware border.
Especially interesting is the idea of software–hardware codesign: the process of designing software and hardware in lockstep, rather than solidifying the ISA and designing in isolation. A core problem in software–hardware codesign is the sheer size of the design space. Without a set ISA to constrain the software–hardware interface, the design space explodes to all possible combinations of hardware and software, with all possible interfaces between them.
This work presents a strategy for managing the massive hardware–software design space within the constrained domain of machine learning inference workloads and accelerators. We first propose EngineIR, a new language for representing machine learning hardware and software in a single program. Then, using equality graphs—a data structure from the compilers literature—we suggest a method for efficiently enumerating the design space by performing rewrites over our representation.
2 Overview of Solution
Within the scope of this work, we view hardware-accelerated machine learning inference workloads as being composed of three distinct hardware or software components. First, at their base, the workloads are built on calls to fixed-size kernels, such as a matrix multiplication; a machine learning accelerator accelerates workloads by providing finely tuned hardware engines implementing these kernels. Second, software schedules expand fixed-size kernel calls to accept arbitrarily sized input by using loops or parallelism to call kernels multiple times. Finally, some concept of storage hardware carries intermediate values between kernel invocations.
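The three-component view above can be sketched concretely. The following is a minimal illustrative model, with names of our own choosing (not EngineIR's actual constructs): a fixed-size hardware engine, a storage buffer, and a software schedule that tiles an arbitrarily sized input over the fixed engine.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three components; all names are illustrative.

@dataclass(frozen=True)
class Engine:
    """A finely tuned hardware unit implementing a fixed-size kernel."""
    name: str
    size: int  # fixed input width the hardware handles per invocation

@dataclass(frozen=True)
class Buffer:
    """Storage hardware carrying intermediate values between invocations."""
    name: str
    capacity: int

@dataclass
class Schedule:
    """Software loop expanding a fixed-size engine to arbitrary input."""
    engine: Engine
    out: Buffer
    total: int  # full problem size

    def invocations(self) -> int:
        # number of engine calls needed to cover the whole input
        return -(-self.total // self.engine.size)  # ceiling division

sched = Schedule(Engine("relu", 32), Buffer("out0", 128), total=128)
assert sched.invocations() == 4  # a 32-wide engine covers 128 in 4 calls
```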
To explore hardware–software splits, we begin with ML workloads written in Relay, which is the intermediate representation used by the TVM compiler. Relay represents a machine learning workload as a series of kernel calls, but does not make explicit the underlying hardware and software components described above.
From Relay, we lower to EngineIR, a Relay-like language which fully reifies the hardware engines, hardware storage buffers, and software schedules underlying Relay programs. EngineIR engines represent underlying computational hardware. An engine is declared with a set of parameters (H, W, C, and K, in Figure 1) corresponding to the parameters of the underlying hardware design. Each usage of a Relay operator will be converted to a call to an EngineIR engine instantiation with concrete parameters. During this lowering process, a software schedule will also be created, implementing the kernel using the underlying hardware engine. Similarly, each converted call will be given an explicit storage buffer for its output.
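To make the lowering step concrete, here is a hedged sketch under our own illustrative encoding (the function name, tuple shapes, and buffer sizing are assumptions, not EngineIR's real representation): a Relay-style conv2d call becomes an engine instantiated with concrete H, W, C, K parameters, an explicit output buffer, and a software schedule tying them together.

```python
# Hypothetical lowering sketch; all names and shapes are illustrative.

def lower_conv2d(h, w, c, k):
    # engine instantiation with concrete parameters (H, W, C, K as in Figure 1)
    engine = ("conv2d_engine", {"H": h, "W": w, "C": c, "K": k})
    # explicit storage buffer for the call's output (sizing is illustrative)
    out = ("buffer", h * w * c)
    # trivial software schedule: a single engine invocation into the buffer
    return ("invoke", engine, out)

sched = lower_conv2d(224, 224, 3, 3)
assert sched[0] == "invoke"
assert sched[1][1]["K"] == 3  # concrete kernel-size parameter
```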
Figure 1 shows an engine declaration for a convolution engine, including parameters for the input size (height, width, channels, and kernel size, respectively). Additionally, it shows a Relay nn.conv2d call being reified into a software schedule over a concrete engine and concrete storage.
Once a workload is lowered from Relay to EngineIR, we enumerate the space of functionally equivalent hardware–software designs using equality graphs (e-graphs). E-graphs are able to represent an exponential number of equivalent programs efficiently; this key feature is what makes our hardware–software search space manageable. To enumerate the search space, we first transform the EngineIR program into its equivalent e-graph. We then perform rewrites over the e-graph. These rewrites encode transformations which alter the hardware–software split of a program, but preserve functional equivalence.
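The sharing that makes e-graphs compact can be illustrated with a toy implementation. This is a minimal sketch of the idea (hashconsing plus a union-find over e-class ids), not any specific library's API, and it omits congruence closure: uniting two equivalent terms makes them share one e-class.

```python
# Toy e-graph sketch: enough to show equivalent programs sharing e-classes.

class EGraph:
    def __init__(self):
        self.parent = []   # union-find over e-class ids
        self.classes = {}  # hashcons: node -> e-class id

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def add(self, op, *children):
        node = (op, tuple(self.find(c) for c in children))
        if node in self.classes:
            return self.find(self.classes[node])  # structural sharing
        eid = len(self.parent)
        self.parent.append(eid)
        self.classes[node] = eid
        return eid

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a != b:
            self.parent[b] = a
        return a

eg = EGraph()
relu128 = eg.add("relu", eg.add("128"))
# a loop over a 32-wide unit computes the same function:
looped = eg.add("loop4", eg.add("relu", eg.add("32")))
eg.union(relu128, looped)
assert eg.find(relu128) == eg.find(looped)  # now one e-class
```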
Figure 2 shows two such rewrites. The program is a single call to a 128-bit-wide ReLU, a common machine learning kernel. Initially, the e-graph has a single node, representing a single usage of a 128-bit-wide ReLU hardware engine. Rewrite 1 encodes the knowledge that we can change the size of ReLU units in hardware by adding a software schedule which loops over the unit. The e-graph is expanded with a new loop-based program; the dotted line indicates that the new program is equivalent to the old. Rewrite 2 encodes the knowledge that we can parallelize a software for loop by instantiating more hardware. By incorporating a large body of such rewrites and running them for a number of iterations, the e-graph expands to include an exponential number of equivalent hardware–software programs. When running an entire workload through a series of rewrites, we expect to see a wide range of design points represented. For example, we should see designs which instantiate an engine for every kernel invocation alongside designs which use complex software schedules and very little hardware. From here, we can attempt to extract promising design candidates; however, the extraction procedure is out of the scope of this early work.
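The two rewrites described above can be sketched as plain term rewrites (applied here to ordinary tuples rather than e-graph nodes; the term encoding is our own illustration): one trades hardware width for a software loop, the other trades a loop for replicated hardware.

```python
# Illustrative versions of the two rewrites; term shapes are assumptions.

def rewrite1(term):
    """Shrink a hardware ReLU unit by wrapping it in a software loop."""
    if term[0] == "hw_relu" and term[1] % 2 == 0:
        return ("loop", 2, ("hw_relu", term[1] // 2))
    return None  # rewrite does not apply

def rewrite2(term):
    """Parallelize a software loop by instantiating more hardware."""
    if term[0] == "loop":
        _, trips, body = term
        return ("parallel", [body] * trips)
    return None

t = ("hw_relu", 128)
t1 = rewrite1(t)   # -> ("loop", 2, ("hw_relu", 64))
t2 = rewrite2(t1)  # -> ("parallel", [("hw_relu", 64), ("hw_relu", 64)])
assert t1 == ("loop", 2, ("hw_relu", 64))
assert t2 == ("parallel", [("hw_relu", 64), ("hw_relu", 64)])
```

In the e-graph setting, each successful match would add the rewritten program to the matched e-class rather than replacing the original, so both design points remain available.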
3 Evaluation Methodology
The goal of this work is to provide a strategy for enumerating a massive design space. As such, the evaluation will focus on our ability to generate a diverse set of useful designs. A diverse set should include many design points which differ significantly from one another. Diversity alone, however, does not aid design space exploration if the designs themselves are not worth exploring. Thus, the set should also include many useful design points; that is, designs which could be turned into efficient hardware.
4 Related Work
There are many recent examples of automated machine learning hardware generation, such as work demonstrating compilation from TensorFlow to cloud FPGAs. Their solution to the hardware–software split is to instantiate one engine for each type of kernel in the workload. While this produces competent designs, the goal of our work is to allow for the easy enumeration and exploration of more complex (but potentially more profitable) splits.
References

- TensorFlow to cloud FPGAs: tradeoffs for accelerating deep neural networks. In 2019 29th International Conference on Field Programmable Logic and Applications (FPL), pp. 360–366.
- Techniques for program verification.
- (2018) Relay: a new IR for machine learning frameworks. In Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages (MAPL 2018).