DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization

07/11/2023
by Simin Chen, et al.

A DL compiler's primary function is to translate DNN programs written in high-level DL frameworks such as PyTorch and TensorFlow into portable executables, which deployed host programs can then execute flexibly. However, existing DL compilers rely on a tracing mechanism: a runtime input is fed to the neural network program, and the execution path is traced to produce the computational graph needed for compilation. This mechanism falls short for modern dynamic neural networks (DyNNs), whose computational graphs vary with the input, so conventional DL compilers cannot compile DyNNs correctly. To address this limitation, we propose DyCL, a general approach that enables any existing DL compiler to successfully compile DyNNs. DyCL tackles the dynamic nature of DyNNs by introducing a compilation mechanism that redistributes the control and data flow of the original DNN programs during compilation. Specifically, DyCL develops program analysis and program transformation techniques to convert a dynamic neural network into multiple sub-neural networks. Each sub-neural network is free of conditional statements and is compiled independently. Furthermore, DyCL synthesizes a host module that models the control flow of the DyNN and invokes the sub-neural networks. Our evaluation demonstrates the effectiveness of DyCL, which achieves a 100% success rate in compiling all dynamic neural networks. Moreover, the compiled executables generated by DyCL run between 1.12× and 20.21× faster than the original DyNNs executed on general-purpose DL frameworks.
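The decomposition described above can be sketched in plain Python. This is a toy illustration of the idea, not the actual DyCL implementation: the function names (`dynn`, `sub_a`, `sub_b`, `host`) are hypothetical, and ordinary functions stand in for traced/compiled sub-networks.

```python
# A "dynamic" program whose execution path depends on the input.
# A tracing-based compiler fed one sample input would bake in only
# the single path that input happens to take.
def dynn(x):
    if x > 0:
        return x * 2   # path A
    else:
        return x - 1   # path B

# DyCL-style rewrite: each path becomes a conditional-free
# sub-network that a tracing compiler can handle independently.
def sub_a(x):
    return x * 2

def sub_b(x):
    return x - 1

# Synthesized host module: it models the original control flow
# and dispatches to the compiled sub-networks.
def host(x):
    return sub_a(x) if x > 0 else sub_b(x)

# The rewritten program is behaviorally equivalent to the original.
assert host(3) == dynn(3) == 6
assert host(-2) == dynn(-2) == -3
```

In the real system, each sub-network would be compiled by an off-the-shelf DL compiler, and the host module would carry the guard conditions that the rewriting extracted from the original program.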

Related research:

- JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs (12/04/2018)
- Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs (01/23/2022)
- Statistical Program Slicing: a Hybrid Slicing Technique for Analyzing Deployed Software (12/31/2021)
- Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Graph Execution (08/22/2023)
- Design Smells in Deep Learning Programs: An Empirical Study (07/05/2021)
- Cavs: A Vertex-centric Programming Interface for Dynamic Neural Networks (12/11/2017)
- ExAIS: Executable AI Semantics (02/20/2022)
