Challenges in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study

01/24/2022
by Tatiana Castro Vélez et al.

Efficiency is essential to support responsiveness with respect to ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code that supports symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development tends to produce DL code that is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, less error-prone imperative DL frameworks encouraging eager execution have emerged, but at the expense of run-time performance. While hybrid approaches aim for the "best of both worlds," the challenges of applying them in the real world are largely unknown. We conduct a data-driven analysis of the challenges – and resultant bugs – involved in writing reliable yet performant imperative DL code, studying 250 open-source projects comprising 19.7 million lines of code (MLOC), along with 470 manually examined code patches and 446 bug reports. The results indicate that hybridization: (i) is prone to API misuse, (ii) can result in performance degradation – the opposite of its intention, and (iii) has limited application due to execution-mode incompatibility. We put forth several recommendations, best practices, and anti-patterns for effectively hybridizing imperative DL code, potentially benefiting DL practitioners, API designers, tool developers, and educators.
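To make the API-misuse hazard concrete, the following is a minimal, self-contained sketch of the trace-once semantics that hybridization decorators such as TensorFlow's `tf.function` rely on. Everything here (`hybridize`, `Node`, `run`, `scale_and_shift`) is a hypothetical toy, not any real framework's API: the decorated function's Python body executes only during an initial trace that builds a symbolic graph, so side effects like `print` silently stop firing on later calls — one of the surprising behaviors that leads to the misuse bugs the study reports.

```python
class Node:
    """Symbolic placeholder recorded while tracing (toy illustration)."""
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

    def __mul__(self, other):
        return Node("mul", (self, other))

    def __add__(self, other):
        return Node("add", (self, other))


def run(node, env):
    """Evaluate a recorded graph against concrete argument values."""
    if isinstance(node, (int, float)):
        return node                      # constant captured at trace time
    if node.op == "input":
        return env[node.inputs]          # positional argument placeholder
    a, b = (run(i, env) for i in node.inputs)
    return a * b if node.op == "mul" else a + b


def hybridize(fn):
    """Toy trace-once decorator: the Python body runs only on the first
    call (per arity); subsequent calls replay the cached graph."""
    cache = {}

    def wrapper(*args):
        key = len(args)
        if key not in cache:
            # "Tracing": fn's Python code, including side effects, runs here.
            placeholders = [Node("input", i) for i in range(len(args))]
            cache[key] = fn(*placeholders)
        return run(cache[key], dict(enumerate(args)))

    return wrapper


@hybridize
def scale_and_shift(x, w):
    print("tracing")      # executes once, at trace time only
    return x * w + 1


print(scale_and_shift(2, 3))  # prints "tracing", then 7
print(scale_and_shift(4, 5))  # no "tracing" this time; prints 21
```

The second call skips the Python body entirely and replays the graph, which is exactly why imperative code that depends on per-call side effects, Python state, or data-dependent control flow can break or degrade when naively hybridized.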

