An Empirical Study on the Bugs Found while Reusing Pre-trained Natural Language Processing Models

11/30/2022
by Rangeet Pan, et al.

In NLP, reusing pre-trained models instead of training from scratch has gained popularity; however, NLP models are mostly black boxes, very large, and often require significant resources. To ease this burden, models trained on large corpora are made available, and developers reuse them for different problems. In traditional DL, by contrast, developers mostly build their models from scratch and thus retain control over the choice of algorithms, data processing, model structure, hyperparameter tuning, etc. In NLP, because pre-trained models are reused, developers have little to no control over such design decisions; instead, they apply fine-tuning or transfer learning on pre-trained models to meet their requirements. Moreover, NLP models and their corresponding datasets are significantly larger than traditional DL models and require heavy computation. These factors often lead to bugs while reusing pre-trained models. While bugs in traditional DL software have been studied intensively, the extensive reuse and black-box structure of pre-trained models motivate us to ask: What types of bugs occur while reusing NLP models? What are their root causes? How do these bugs affect the system? To answer these questions, we studied the bugs reported while reusing 11 popular NLP models. We mined 9,214 issues from GitHub repositories and identified 984 bugs. We created a taxonomy of bug types, root causes, and impacts. Our observations led to several findings, including that limited access to model internals results in a lack of robustness, that missing input validation leads to the propagation of algorithmic and data bias, and that high resource consumption causes more crashes. Our observations suggest several bug patterns, which would greatly facilitate further efforts to reduce bugs in pre-trained models and code reuse.
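To make the reuse scenario concrete, below is a minimal sketch of loading and querying a pre-trained NLP model, assuming the Hugging Face transformers library and a BERT checkpoint; the specific model name, task, and inputs are illustrative and not drawn from the study itself. It shows where a developer's control begins and ends: the pre-trained weights and architecture are fixed, while the task head, tokenization settings, and fine-tuning hyperparameters remain the developer's responsibility.

```python
# Minimal sketch of pre-trained model reuse (assumption: Hugging Face transformers
# and the "bert-base-uncased" checkpoint; model, task, and input are illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # hypothetical choice of pre-trained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A new classification head is attached; the encoder weights stay as released.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenization with explicit truncation: unchecked input lengths and formats are
# one place where reuse-related bugs can surface.
inputs = tokenizer(
    "The service was excellent.",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)

# Inference only; fine-tuning or transfer learning would update these weights
# without exposing the model's internals to the developer.
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```

The same loading pattern is typically the starting point for fine-tuning: the developer swaps the inference step for a training loop over task-specific data, which is where hyperparameter and resource-related issues tend to appear.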

