Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Graph Execution

08/22/2023
by   Raffi Khatchadourian, et al.

Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code – supporting symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. Though hybrid approaches aim for the "best of both worlds," using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution – avoiding performance bottlenecks and semantically inequivalent results. We present our ongoing work on an automated refactoring approach that assists developers in specifying whether and how their otherwise eagerly-executed imperative DL code could be reliably and efficiently executed as graphs at run-time in a semantics-preserving fashion. The approach, based on a novel tensor analysis specifically for imperative DL code, consists of refactoring preconditions for automatically determining when it is safe and potentially advantageous to migrate imperative DL code to graph execution and modifying decorator parameters or eagerly executing code already running as graphs. The approach is being implemented as a PyDev Eclipse IDE plug-in and uses the WALA Ariadne analysis framework. We discuss our ongoing work towards optimizing imperative DL code to its full potential.
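To make the hazard concrete: hybrid approaches such as TensorFlow's `tf.function` trace a Python function's body once into a graph and replay the graph on later calls, so Python-level side effects execute only at trace time — one source of the "semantically inequivalent results" the refactoring preconditions must guard against. The sketch below illustrates this in pure Python with a hypothetical `graph_function` decorator and a tiny `Node` expression graph standing in for `tf.function` and TensorFlow's tracing machinery; it is an illustration of the tracing semantics, not the paper's implementation.

```python
import functools

class Node:
    """A node in a tiny expression graph: either an input or an add op."""
    def __init__(self, op, args):
        self.op, self.args = op, args

    def __add__(self, other):
        return Node("add", (self, other))

    def evaluate(self, env):
        # Inputs look up their concrete value; ops recurse over operands.
        if self.op == "input":
            return env[self.args]
        return sum(a.evaluate(env) for a in self.args)

def graph_function(fn):
    """Hypothetical stand-in for tf.function: trace the Python body once
    with symbolic inputs, then replay only the captured graph."""
    state = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if "graph" not in state:
            # Tracing: the Python body (including any side effects) runs
            # ONCE here, against symbolic placeholders.
            symbols = [Node("input", i) for i in range(len(args))]
            state["graph"] = fn(*symbols)
        # Replay: only the captured graph executes with the new arguments;
        # Python side effects in the body do NOT recur.
        return state["graph"].evaluate(dict(enumerate(args)))
    return wrapper

trace_log = []

@graph_function
def add(x, y):
    trace_log.append("python body ran")  # a Python-level side effect
    return x + y

print(add(1, 2))    # traces the body, then evaluates the graph -> 3
print(add(10, 20))  # replays the graph only -> 30
print(trace_log)    # the body itself ran exactly once, at trace time
```

Under eager execution, `trace_log` would grow on every call; under graph execution it grows only during tracing. Detecting when imperative code relies on such per-call Python behavior is precisely the kind of condition a safe migration analysis must check.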

Related research

01/24/2022
Challenges in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study
Efficiency is essential to support responsiveness w.r.t. ever-growing da...

12/04/2018
JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs
The rapid evolution of deep neural networks is demanding deep learning (...

07/11/2023
DyCL: Dynamic Neural Network Compilation Via Program Rewriting and Graph Optimization
DL compiler's primary function is to translate DNN programs written in h...

07/05/2021
Design Smells in Deep Learning Programs: An Empirical Study
Nowadays, we are witnessing an increasing adoption of Deep Learning (DL)...

11/01/2021
Collage: Automated Integration of Deep Learning Backends
Strong demands for efficient deployment of Deep Learning (DL) applicatio...

10/13/2022
Graph-based Neural Modules to Inspect Attention-based Architectures: A Position Paper
Encoder-decoder architectures are prominent building blocks of state-of-...

09/12/2023
Just-in-Time autotuning
Performance portability is a major concern on current architectures. One...
