Self-Claimed Assumptions in Deep Learning Frameworks: An Exploratory Study

04/29/2021
by   Chen Yang, et al.

Deep learning (DL) frameworks have been extensively designed, implemented, and used in software projects across many domains. However, due to a lack of knowledge or information, time pressure, complex context, etc., various uncertainties emerge during development, leading to assumptions being made in DL frameworks. Though not all assumptions negatively affect the frameworks, being unaware of certain assumptions can result in critical problems (e.g., system vulnerabilities and failures, inconsistencies, and increased cost). As a first step toward addressing these problems, there is a need to explore and understand the assumptions made in DL frameworks. To this end, we conducted an exploratory study of self-claimed assumptions (SCAs) in terms of their distribution, classification, and impacts, using code comments from nine popular DL framework projects on GitHub. The results are that: (1) 3,084 SCAs are scattered across 1,775 files in the nine DL frameworks, ranging from 8 (Keras) to 1,460 (TensorFlow) SCAs. (2) There are four types of SCA validity: Valid SCA, Invalid SCA, Conditional SCA, and Unknown SCA, and four types of SCAs based on their content: Configuration and Context SCA, Design SCA, Tensor and Variable SCA, and Miscellaneous SCA. (3) Both valid and invalid SCAs may have an impact within a specific scope (e.g., within a function) on the DL frameworks. Certain technical debt is induced when making SCAs, and source code is written and decisions are made based on SCAs. This is the first study investigating SCAs in DL frameworks, helping researchers and practitioners gain a comprehensive understanding of the assumptions made. We also provide the first dataset of SCAs for further research and practice in this area.
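Since SCAs are identified from code comments, a keyword-based scan is one plausible way such comments could be collected. The sketch below is a minimal illustration, not the study's actual extraction tool: the keyword list and comment patterns are assumptions for demonstration, and the paper's real search terms and method may differ.

```python
import re

# Hypothetical keyword list; the study's actual search terms may differ.
SCA_KEYWORDS = ("assume", "assumes", "assumed", "assuming", "assumption")

# Match Python-style (#), C++-style (//), and C-style (/* */) comments.
COMMENT_RE = re.compile(r"#(.*)|//(.*)|/\*([\s\S]*?)\*/")

def find_scas(source: str):
    """Return comment fragments that mention an assumption keyword."""
    scas = []
    for match in COMMENT_RE.finditer(source):
        # Exactly one alternative group matched; take its text.
        text = next(g for g in match.groups() if g is not None).strip()
        if any(kw in text.lower() for kw in SCA_KEYWORDS):
            scas.append(text)
    return scas

example = '''
int stride = 4;  // Assume the input tensor is NHWC.
# unrelated comment
x = y + 1  # Assuming y is already normalized.
'''

print(find_scas(example))
# → ['Assume the input tensor is NHWC.', 'Assuming y is already normalized.']
```

In practice, such a keyword scan would only be a first filter: the study's classification into validity and content types (e.g., Valid vs. Invalid SCA, Tensor and Variable SCA) requires manual analysis of each candidate comment.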


research
03/13/2023

Automatic Identification and Extraction of Assumptions on GitHub

In software development, due to the lack of knowledge or information, ti...
research
04/17/2022

On Reporting Performance and Accuracy Bugs for Deep Learning Frameworks: An Exploratory Study from GitHub

The tremendous success of Deep Learning (DL) has significantly boosted t...
research
02/25/2019

A detailed comparative study of open source deep learning frameworks

Deep Learning (DL) is one of the hottest trends in machine learning as D...
research
03/30/2023

Analysis of Failures and Risks in Deep Learning Model Converters: A Case Study in the ONNX Ecosystem

Software engineers develop, fine-tune, and deploy deep learning (DL) mod...
research
09/15/2019

An Empirical Study towards Characterizing Deep Learning Development and Deployment across Different Frameworks and Platforms

Deep Learning (DL) has recently achieved tremendous success. A variety o...
research
12/13/2020

Comparing the costs of abstraction for DL frameworks

High level abstractions for implementing, training, and testing Deep Lea...
research
02/04/2021

Ivy: Templated Deep Learning for Inter-Framework Portability

We introduce Ivy, a templated Deep Learning (DL) framework which abstrac...
