Deep-Lock: Secure Authorization for Deep Neural Networks

08/13/2020 ∙ by Manaar Alam, et al. ∙ University of Massachusetts Amherst

Trained Deep Neural Network (DNN) models are considered valuable Intellectual Properties (IP) in several business models. Prevention of IP theft and unauthorized usage of such DNN models has been raised as a significant concern by industry. In this paper, we address the problem of preventing unauthorized usage of DNN models by proposing a generic and lightweight key-based model-locking scheme, which ensures that a locked model functions correctly only upon applying the correct secret key. The proposed scheme, known as Deep-Lock, utilizes S-Boxes with good security properties to encrypt each parameter of a trained DNN model with secret keys generated from a master key via a key-scheduling algorithm. The resulting dense network of encrypted weights is found to be robust against model fine-tuning attacks. Finally, Deep-Lock does not require any intervention in the structure or training of the DNN models, making it applicable to all existing software and hardware implementations of DNNs.

1. Introduction

The remarkable success of Deep Neural Networks (DNN) has created numerous opportunities for commercial applications. Constructing accurate DNN models for targeted applications is non-trivial, as it demands domain expertise and powerful computing resources. Moreover, the datasets used for training are deemed one of the most valuable assets of modern businesses. Consequently, modern business models consider trained DNNs as Intellectual Property (IP) cores. The presence of such IP cores both in embedded platforms (Google Coral, Intel NCS) and in cloud-based Machine-Learning-as-a-Service (MLaaS) offerings (Amazon AWS, BigML) creates multiple practical security concerns. One such potent threat is the stealing of the decision boundary of the IP core (also called Model Stealing), which has been addressed previously by several works (Tramèr et al., 2016; Yu et al., 2020). Businesses may suffer economic losses, as well as losses of brand value, due to such incidents.

In this work, we address another threat model, proposed recently in (Chakraborty et al., 2020), which concerns the piracy of DNN models and may also result in significant economic losses for the original model owner. While model stealing may also lead to illegal usage, the problem of model piracy is more fundamental. An adversary may obtain a model from several unwanted sources and continue using it as a black box (without bothering to steal its decision boundary). Unless some mechanism for authorized access is implemented, this threat can never be mitigated. While the threat applies both to MLaaS and to embedded platforms, it is more severe in the latter case, as the model can easily be extracted as a piece of software and distributed among multiple parties.

Although there exist techniques for watermarking a given DNN model to establish model ownership (Adi et al., 2018; Guo and Potkonjak, 2018; Darvish Rouhani et al., 2019), they cannot prevent its illegal usage. Hence, an explicit locking mechanism for DNNs is in demand, one that prevents unauthorized usage by making the model malfunction. Previous research in this area has mainly focused on obfuscating model structures (Xu et al., 2018). However, a more critical component of a DNN IP is its trained parameters: most industrial applications use previously published DNN architectures, which have demonstrated high modeling capability, but with different parameters depending on the application. Observing this, we present Deep-Lock, which enables secret key-based locking of DNN parameters. Unauthorized access without knowledge of the legitimate key renders the model unusable by severely degrading its accuracy. Deep-Lock utilizes S-Boxes with good cryptographic properties to lock each trained parameter of a DNN model, which turns out to be a lightweight approach for model locking that causes no significant degradation in inference time or model size. The secret keys required for locking are generated from a master key via a key schedule, so only the master key needs to be stored within the device. Moreover, the locking mechanism works on trained models and has no interaction with the training phase, making it generic and scalable across model types. We have verified our claims on several industry-scale DNN models. We also considered improving the accuracy of a locked model by model fine-tuning, where a manifest dataset (a set of labeled data that is available to the user; not necessarily a subset of the original training dataset, but one that resembles it sufficiently) is used to improve the model accuracy even for wrong key values. We found that model fine-tuning fails to improve a model's accuracy to any practically significant level.

Deep-Lock bears certain similarities with HPNN, a key-dependent hardware-assisted DNN IP protection scheme proposed in (Chakraborty et al., 2020). However, HPNN requires changes to the model structure, which Deep-Lock does not, easing its usability. Furthermore, Deep-Lock uses established cryptographic constructs for locking, which HPNN lacks. It is worth noting that Deep-Lock differs significantly from state-of-the-art software product-key schemes, where a key is verified before access to locked software is granted. The security of such schemes depends on the verification step, and several well-known techniques exist to bypass this check and gain control over the entire product. Furthermore, once the verification step has passed, the unlocked model remains in memory, from where it can be stolen without much hindrance. In contrast, Deep-Lock embeds the verification mechanism inside the DNN and mandates verification for every query made to the network. The unlocked model is thus never left in memory, making direct theft of the model challenging.

The rest of the paper is organized as follows: Section 2 discusses the adversarial threat model considered in this work. Section 3 presents the proposed methodology in detail. The performance and security of the proposed scheme are evaluated in Section 4 and Section 5, respectively. Finally, Section 6 concludes the work.

2. Threat Model

The primary objective of this work is to ensure that a DNN model owner provides services only to authorized customers who have acquired a license for model usage by paying a fee. To achieve this objective, the model owner encrypts the DNN model with a secret key. An adversary aims to use the encrypted DNN model accurately without paying a license fee to the model owner. In this scenario, we consider an adversary who has white-box access to the encrypted model, i.e., she knows the model structure as well as all encrypted parameter values. The only information she does not know is the secret key used to encrypt the model. We also assume that the adversary has a manifest dataset, which she can use to query the model with a key guess and obtain an output. If the adversary uses the encrypted parameters in the stolen DNN architecture, it produces a wrong input-output mapping, as she does not have the legitimate secret key.

3. Proposed Methodology

The proposed key-based authorization scheme Deep-Lock operates in two modes: an offline mode and an online mode. In the offline mode, a DNN model owner trains and locks a model; in the online mode, the locked DNN model is deployed for practical usage. An overview of the operations performed in the offline mode is as follows:

  • First, a DNN model owner acquires a large set of data related to a particular task and expends considerable money, expert knowledge, and computational resources to label the data and train a model. The model owner can exercise any popular learning technique for fine-tuning the model parameters to obtain the most accurate model.

  • The DNN model owner then selects a master key and uses a key-scheduling algorithm to generate a distinct secret key for each trained parameter. The parameters are then encrypted with an S-Box operation using the derived secret keys. The original model parameters are replaced with these encrypted values before the model is deployed for practical usage. The lock operation is shown in Algorithm 1.

  • Finally, the DNN model owner distributes the locked model to authorized customers for practical usage. In a cloud-based service, the secret key can be distributed along with access to the locked model. In a deployed hardware device, the secret key can be embedded into a trusted platform module (TPM), assuming that an adversary can neither access the TPM nor retrieve the key from it.

Input : Set of real-valued trained DNN parameters W = {w_1, w_2, ..., w_n}; a master key K; a key-scheduling algorithm KeySchedule; a substitution-box mapping SBox
Output : Locked DNN parameters W^lock = {w_1^lock, w_2^lock, ..., w_n^lock}
{k_1, k_2, ..., k_n} = KeySchedule(K);
for i = 1 to n do
       b_i = binary representation of w_i;
       b_i^lock = encryption of b_i with SBox under key k_i;
       w_i^lock = converted real value of b_i^lock;
end for
Algorithm 1. Lock Operation
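
For concreteness, the following is a minimal Python sketch of the lock operation. The fixed byte permutation and the SHA-256-based key expansion are stand-ins chosen for brevity (Section 4.1 instantiates them with the AES S-Box and the AES key schedule), and the byte-wise granularity and XOR-then-substitute step are assumptions of this sketch rather than details fixed by Algorithm 1.

import hashlib
import random
import numpy as np

# Stand-in S-Box: a fixed bijective byte mapping and its inverse.
_rng = random.Random(2020)
SBOX = np.array(_rng.sample(range(256), 256), dtype=np.uint8)
INV_SBOX = np.zeros(256, dtype=np.uint8)
INV_SBOX[SBOX] = np.arange(256, dtype=np.uint8)

def key_schedule(master_key: bytes, n_bytes: int) -> np.ndarray:
    """Stand-in key schedule: expand the master key into n_bytes of key material."""
    out, ctr = bytearray(), 0
    while len(out) < n_bytes:
        out += hashlib.sha256(master_key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return np.frombuffer(bytes(out[:n_bytes]), dtype=np.uint8)

def lock(weights: np.ndarray, master_key: bytes) -> np.ndarray:
    """Algorithm 1 (sketch): encrypt each float32 parameter byte-wise with a keyed S-Box."""
    raw = np.frombuffer(weights.astype(np.float32).tobytes(), dtype=np.uint8)
    ks = key_schedule(master_key, raw.size)
    enc = SBOX[raw ^ ks]                                      # keyed S-Box substitution per byte
    return np.frombuffer(enc.tobytes(), dtype=np.float32).reshape(weights.shape)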

In the online mode of operation, a user needs to supply a master key with each query. The same key-scheduling algorithm used in the offline mode takes this key as input and generates a decryption key for each locked parameter. The parameters are then decrypted with the inverse S-Box operation and the derived secret keys. The unlock operation is shown in Algorithm 2. Thus, for the correct master key, all the locked parameters are correctly decrypted and the model produces correct predictions; for an incorrect key, the model produces wrong predictions.

Input : Set of real-valued locked DNN parameters W^lock = {w_1^lock, w_2^lock, ..., w_n^lock}; an input key K'; a key-scheduling algorithm KeySchedule; an inverse substitution-box mapping SBox^-1
Output : Unlocked DNN parameters W = {w_1, w_2, ..., w_n}
{k_1, k_2, ..., k_n} = KeySchedule(K');
for i = 1 to n do
       b_i^lock = binary representation of w_i^lock;
       b_i = decryption of b_i^lock with SBox^-1 under key k_i;
       w_i = converted real value of b_i;
end for
Algorithm 2. Unlock Operation
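
Continuing the sketch above, the unlock operation inverts the keyed substitution byte by byte; only the correct master key recovers the original parameters, while any other key guess yields arbitrary weight values.

def unlock(locked: np.ndarray, key_guess: bytes) -> np.ndarray:
    """Algorithm 2 (sketch): invert the keyed S-Box byte-wise with the guessed key."""
    enc = np.frombuffer(locked.astype(np.float32).tobytes(), dtype=np.uint8)
    ks = key_schedule(key_guess, enc.size)
    raw = INV_SBOX[enc] ^ ks                                  # undo the substitution, then the XOR
    return np.frombuffer(raw.tobytes(), dtype=np.float32).reshape(locked.shape)

# Round-trip check with toy weights (illustrative values only):
w = np.array([[0.12, -0.53], [1.07, 0.004]], dtype=np.float32)
locked = lock(w, master_key=b"correct master key")
assert np.array_equal(unlock(locked, b"correct master key"), w)   # exact recovery
print(unlock(locked, b"a wrong key guess"))                       # garbage weights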

4. Performance Evaluation

The method is evaluated on an Intel Xeon CPU E5-2690 v4 @ 2.60GHz with 56 cores. The Convolutional Neural Network (CNN) structures and the number of trainable parameters for each evaluation dataset are shown in Table 1.

Dataset          Network Structure              Number of Parameters
MNIST            C, MP, C, MP, FC, FC           86,166
Fashion-MNIST    C, MP, C, MP, FC, FC           180,438
CIFAR-10         C, C, MP, C, C, MP, FC, FC     1,250,858
  • C: Convolution Layer, MP: MaxPool Layer, FC: Fully Connected Layer

Table 1. Datasets and Models
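
As an aside, the shorthand in Table 1 maps onto standard layer stacks. The following illustrative PyTorch rendering of the "C, MP, C, MP, FC, FC" structure used for MNIST and Fashion-MNIST is only a sketch: the filter and unit counts are assumptions and do not reproduce the exact parameter counts listed in the table.

import torch.nn as nn

# Illustrative C-MP-C-MP-FC-FC stack for 28x28 single-channel inputs;
# layer widths are assumed, not taken from the paper.
mnist_like_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),    # C
    nn.MaxPool2d(2),                                          # MP
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),   # C
    nn.MaxPool2d(2),                                          # MP
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 64), nn.ReLU(),                     # FC
    nn.Linear(64, 10),                                        # FC
)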

4.1. Accuracy of Locked Models

We have locked each model listed in Table 1 with a master key using Deep-Lock. Without loss of generality, we have used the AES S-Box and the AES key-scheduling algorithm to lock the models. Figure 1(a) shows the original classification accuracies of the trained CNN models, the classification accuracies of the locked models for the correct master key, and the average accuracies of the locked models over 100 randomly selected incorrect keys. We can observe that Deep-Lock does not compromise the accuracy of the trained models. We can also observe that, for a wrong key, the locked models behave as random classifiers.

Figure 1. (a) Classification accuracy of the original trained model, the locked model with the correct key, and the locked model with a wrong key guess. (b) Average prediction time for a single input for both the unencrypted and the locked models.
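
The wrong-key behaviour summarized in Figure 1(a) can be checked with a small loop of the following form; evaluate(weights, dataset) is a hypothetical accuracy routine, the 128-bit random key guesses are an assumption, and unlock refers to the sketch in Section 3.

import os

def wrong_key_accuracy(locked_weights, dataset, evaluate, n_trials=100):
    """Average accuracy of a locked model over randomly guessed (wrong) keys."""
    accs = []
    for _ in range(n_trials):
        guess = os.urandom(16)                                 # random 128-bit key guess
        accs.append(evaluate(unlock(locked_weights, guess), dataset))
    return sum(accs) / n_trials                                # expected: near random guessing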

4.2. Performance Overhead

The average response times of the unencrypted and locked DNN models for classifying a single input are shown in Figure 1(b). We can observe that the overhead is not significant for MNIST and Fashion-MNIST, while it is comparatively higher for the CIFAR-10 dataset. These figures are based on sequential execution of software implementations of the DNN models; they can be improved significantly with a high degree of parallelism in dedicated hardware accelerators.
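
A per-query timing comparison of the kind reported in Figure 1(b) can be sketched as follows; predict(weights, x) is a hypothetical inference routine, and the decrypt-then-infer structure reflects the online mode, in which the locked parameters are unlocked for every query.

import time

def mean_query_time(run_query, n: int = 1000) -> float:
    """Average wall-clock time of one query, in seconds."""
    start = time.perf_counter()
    for _ in range(n):
        run_query()
    return (time.perf_counter() - start) / n

def overhead(predict, x, weights, locked, master_key):
    """Compare plain inference with per-query unlock followed by inference."""
    t_plain = mean_query_time(lambda: predict(weights, x))
    t_locked = mean_query_time(lambda: predict(unlock(locked, master_key), x))
    return t_plain, t_locked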

5. Security Evaluation

To compare the security benefits with the recent lightweight hardware-assisted key-dependent DNN obfuscation framework HPNN (Chakraborty et al., 2020), we provide a security evaluation of Deep-Lock against model fine-tuning attacks. In a model fine-tuning attack, the adversary's objective is to retrain the locked model to an optimum parameter setting, different from the original, that achieves performance comparable to the original model. The attack assumes an adversary with the expertise and computational resources to train any DNN model. To perform the attack, the adversary obtains the DNN architecture and the model parameters from a locked model and uses the manifest dataset to retrain the model. The attack is considered successful if the adversary obtains high accuracy from the locked model with a wrong key. For this experiment, we assume the adversary has access to 10% of the training data. It is shown in (Chakraborty et al., 2020) that with such a dataset, an adversary can fine-tune a DNN model locked with the HPNN framework to achieve 82.43% and 78.53% accuracy for the Fashion-MNIST and CIFAR-10 datasets, respectively. We applied a similar strategy to Deep-Lock; the validation accuracy over training iterations for all the models is shown in Figure 2. We observe that even with access to 10% of the original training data, the adversary is not able to retrain the model to an accuracy comparable to that of the original model. In fact, the validation accuracies do not improve beyond random classification.

Figure 2. Validation accuracies for each DNN model over training iterations during model fine-tuning
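
The fine-tuning attack evaluated above amounts to ordinary retraining of the locked model under a wrong key guess. A minimal PyTorch sketch is shown below; model (a network loaded with parameters unlocked under a wrong key) and manifest_loader (a loader over roughly 10% of labeled data) are hypothetical stand-ins, and the optimizer and hyperparameters are assumptions, since the paper does not prescribe them.

import torch
import torch.nn as nn

def fine_tune_attack(model: nn.Module, manifest_loader, epochs: int = 20,
                     lr: float = 1e-3, device: str = "cpu") -> nn.Module:
    """Retrain a locked model (decrypted under a wrong key) on the manifest dataset."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in manifest_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model  # for Deep-Lock, validation accuracy stays near random (Figure 2)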

6. Conclusion

In this paper, we propose Deep-Lock, a lightweight, generic, key-based DNN IP protection scheme that uses an S-Box and a key-scheduling algorithm to defend against unauthorized usage of stolen DNN models. The scheme ensures that only an authorized user with the correct master key can use the locked DNN model accurately; a wrong master key results in random classification. Deep-Lock does not modify any structural details of a DNN model, making it scalable to all existing software and hardware DNN implementations without adversely affecting performance. We evaluated Deep-Lock on various DNN architectures and datasets and demonstrated its robustness against model fine-tuning attacks.

References

  • Y. Adi, C. Baum, M. Cisse, B. Pinkas, and J. Keshet (2018) Turning your weakness into a strength: watermarking deep neural networks by backdooring. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1615–1631.
  • A. Chakraborty, A. Mondal, and A. Srivastava (2020) Hardware-assisted intellectual property protection of deep learning models. In Proceedings of the 57th Annual Design Automation Conference (DAC).
  • B. Darvish Rouhani, H. Chen, and F. Koushanfar (2019) DeepSigns: an end-to-end watermarking framework for ownership protection of deep neural networks. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pp. 485–497.
  • J. Guo and M. Potkonjak (2018) Watermarking deep neural networks for embedded systems. In 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pp. 1–8.
  • F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart (2016) Stealing machine learning models via prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16), pp. 601–618.
  • H. Xu, Y. Su, Z. Zhao, Y. Zhou, M. R. Lyu, and I. King (2018) DeepObfuscation: securing the structure of convolutional neural networks via knowledge distillation. arXiv preprint arXiv:1806.10313.
  • H. Yu, K. Yang, T. Zhang, Y. Tsai, T. Ho, and Y. Jin (2020) CloudLeak: large-scale deep learning models stealing through adversarial examples. In Proceedings of the Network and Distributed System Security Symposium (NDSS).