Relay: A High-Level Compiler for Deep Learning
Frameworks for writing, compiling, and optimizing deep learning (DL) models have recently enabled progress in areas like computer vision and natural language processing. Extending these frameworks to accommodate the rapidly diversifying landscape of DL models and hardware platforms presents challenging tradeoffs between expressivity, composability, and portability. We present Relay, a new compiler framework for DL. Relay's functional, statically typed intermediate representation (IR) unifies and generalizes existing DL IRs to express state-of-the-art models. The introduction of Relay's expressive IR requires careful design of domain-specific optimizations, addressed via Relay's extension mechanisms. Using these extension mechanisms, Relay supports a unified compiler that can target a variety of hardware platforms. Our evaluation demonstrates Relay's competitive performance for a broad class of models and devices (CPUs, GPUs, and emerging accelerators). Relay's design demonstrates how a unified IR can provide expressivity, composability, and portability without compromising performance.
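Relay later shipped as the high-level IR of Apache TVM, so the workflow the abstract describes can be exercised from Python. Below is a minimal sketch of defining, compiling, and running a one-layer model; the specific calls (relay.var, relay.nn.dense, relay.build, graph_executor) are taken from recent TVM releases and are assumptions about the current API surface rather than anything specified in the paper, and exact names vary across versions (e.g. graph_executor vs. the older graph_runtime, .numpy() vs. .asnumpy()).

    # Minimal sketch, assuming Apache TVM's Relay Python API (names may differ by version).
    import numpy as np
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # Declare typed inputs: Relay expressions carry static shapes and dtypes.
    x = relay.var("x", shape=(1, 784), dtype="float32")
    w = relay.var("w", shape=(128, 784), dtype="float32")

    # Compose operators functionally; the result is a typed expression, not a mutable graph node.
    y = relay.nn.relu(relay.nn.dense(x, w))
    f = relay.Function([x, w], y)

    # Wrap in a module, run the standard optimization passes, and lower to a target
    # (here the local CPU via LLVM; GPU or accelerator targets use the same path).
    mod = tvm.IRModule.from_expr(f)
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm")

    # Execute the compiled module with the graph executor.
    dev = tvm.cpu()
    rt = graph_executor.GraphModule(lib["default"](dev))
    rt.set_input("x", np.random.rand(1, 784).astype("float32"))
    rt.set_input("w", np.random.rand(128, 784).astype("float32"))
    rt.run()
    out = rt.get_output(0).numpy()

The sketch mirrors the abstract's claims: the model is an ordinary typed expression, domain-specific optimizations run as passes inside a PassContext, and the same module can be retargeted to CPUs, GPUs, or accelerators by changing only the target string, not the frontend code.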