Black Boxes, White Noise: Similarity Detection for Neural Functions

02/20/2023
by Farima Farmahinifarahani, et al.

Similarity, or clone, detection has important applications in detecting copyright violation and software theft, in code search, and in identifying malicious components. There are now a good number of open-source and proprietary clone detectors for programs written in traditional programming languages. However, the increasing adoption of deep learning models in software poses a challenge to these tools: these models implement functions that are inscrutable black boxes. As more software includes these DNN functions, new techniques are needed to assess the similarity between the deep learning components of software. Previous work has introduced techniques for comparing the representations learned at various layers of deep neural network models by feeding canonical inputs to the models. Our goal is to compare DNN functions when canonical inputs are not available, as they may not be in many application scenarios. The challenge, then, is to generate appropriate inputs and to identify a metric that, for those inputs, is capable of representing the degree of functional similarity between two comparable DNN functions. Our approach uses random inputs with values between -1 and 1, in a shape compatible with what the DNN models expect, and then compares the outputs by performing correlation analysis. Our study shows how it is possible to perform similarity analysis even in the absence of meaningful canonical inputs: the responses of two comparable DNN functions to random inputs expose those functions' similarity, or lack thereof. Of all the metrics tried, we find that Spearman's rank correlation coefficient is the most powerful and versatile, although in special cases other methods and metrics are more expressive. We present a systematic empirical study comparing the effectiveness of several similarity metrics using a dataset of 56,355 classifiers collected from GitHub, accompanied by a sensitivity analysis that reveals how certain training-related properties of the models affect the effectiveness of the similarity metrics. To the best of our knowledge, this is the first work to show that the similarity of DNN functions can be detected using random inputs. Our study of correlation metrics, and the identification of Spearman's rank correlation coefficient as the most powerful among them for this purpose, establishes a complete and practical method for DNN clone detection that can be used in the design of new tools. It may also serve as inspiration for other program analysis tasks whose approaches break down in the presence of DNN components.
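
To make the approach concrete, the sketch below illustrates one plausible instantiation of the comparison described in the abstract. It assumes two PyTorch classifiers that accept the same input shape and uses SciPy to compute the correlation; the function name dnn_similarity, the models model_a and model_b, and the parameters input_shape and n_samples are illustrative placeholders rather than details taken from the paper.

# Minimal sketch of the random-input ("white noise") similarity check, assuming
# model_a and model_b are CPU PyTorch modules that accept the same input shape.
# All names below are illustrative, not from the paper.
import numpy as np
import torch
from scipy.stats import spearmanr

def dnn_similarity(model_a, model_b, input_shape, n_samples=256, seed=0):
    """Feed identical random inputs in [-1, 1] to both models and return
    Spearman's rank correlation between their flattened outputs."""
    rng = np.random.default_rng(seed)
    # Random inputs, uniform in [-1, 1], shaped as the models expect.
    x = torch.from_numpy(
        rng.uniform(-1.0, 1.0, size=(n_samples, *input_shape)).astype(np.float32)
    )
    model_a.eval()
    model_b.eval()
    with torch.no_grad():
        out_a = model_a(x).numpy().ravel()
        out_b = model_b(x).numpy().ravel()
    # Rank-correlate the paired outputs; values near 1 suggest functional similarity.
    rho, _ = spearmanr(out_a, out_b)
    return rho

Under these assumptions, a high rho on such random inputs would be taken as evidence that the two classifiers implement similar functions, without requiring any meaningful inputs from the models' original domain.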

Related research

- NeuralVis: Visualizing and Interpreting Deep Learning Models (06/03/2019)
- There is Limited Correlation between Coverage and Robustness for Deep Neural Networks (11/14/2019)
- An Empirical Study of the Relationships between Code Readability and Software Complexity (08/30/2019)
- Graph-Based Similarity of Neural Network Representations (11/22/2021)
- Grounding Representation Similarity with Statistical Testing (08/03/2021)
- Backdoor Mitigation in Deep Neural Networks via Strategic Retraining (12/14/2022)
