Clones in Deep Learning Code: What, Where, and Why?
Deep learning applications are becoming increasingly popular, and developers of deep learning systems strive to write more efficient code. Deep learning systems evolve constantly, imposing tighter development timelines and increasing complexity, which may lead to bad design decisions. A copy-paste approach is widely used among deep learning developers because they rely on common frameworks and duplicate similar tasks, yet developers often fail to properly propagate changes to all clone fragments during maintenance activities. To our knowledge, no study has examined code cloning practices in deep learning development. Given the negative impacts of clones on software quality reported in studies of traditional systems, it is important to understand the characteristics and potential impacts of code clones on deep learning systems. To this end, we use the NiCad tool to detect clones in 59 Python, 14 C#, and 6 Java deep learning systems and an equal number of traditional software systems. We then analyze the frequency and distribution of code clones in deep learning and traditional systems, and further examine the distribution of code clones using a location-based taxonomy. We also study the correlation between bugs and code clones to assess the impact of clones on the quality of the studied systems. Finally, we introduce a code clone taxonomy for deep learning programs and identify the deep learning development phases in which cloning carries the highest risk of faults. Our results show that code cloning is a frequent practice in deep learning systems and that deep learning developers often clone code from files located in distant parts of the system. In addition, we found that code cloning occurs more frequently during DL model construction, and that hyperparameter setting is the phase in which cloning is the riskiest, since it often leads to faults.
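To make concrete the kind of near-miss clone the abstract refers to, the sketch below shows two hypothetical hyperparameter-setup functions whose bodies were copy-pasted with only identifiers and literals changed, and where a later fix was applied to one copy but not the other. The function names, parameter names, and values are illustrative assumptions, not code taken from the studied systems.

```python
# Hypothetical illustration of a near-miss clone in hyperparameter setup.
# The two functions were copy-pasted; a later fix to the learning-rate
# schedule was applied only to the first, leaving the clone inconsistent.

def build_cnn_config(num_classes: int) -> dict:
    """Hyperparameters for the CNN branch (updated after a bug fix)."""
    return {
        "learning_rate": 1e-3,
        "lr_decay": 0.95,          # fix: decay added to stabilize training
        "batch_size": 64,
        "epochs": 30,
        "optimizer": "adam",
        "num_classes": num_classes,
    }


def build_rnn_config(num_classes: int) -> dict:
    """Hyperparameters for the RNN branch (stale clone of the CNN setup)."""
    return {
        "learning_rate": 1e-3,
        # missing "lr_decay": the fix was never propagated to this clone
        "batch_size": 64,
        "epochs": 30,
        "optimizer": "adam",
        "num_classes": num_classes,
    }


if __name__ == "__main__":
    cnn, rnn = build_cnn_config(10), build_rnn_config(10)
    # Keys present in one clone but not the other hint at an unpropagated change.
    print("Diverging keys:", set(cnn) ^ set(rnn))
```

Divergence of this kind between clone fragments is the failure mode that makes cloning during hyperparameter setting particularly fault-prone.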