CheckNet: Secure Inference on Untrusted Devices

06/17/2019
by Marcus Comiter, et al.
Harvard University

We introduce CheckNet, a method for secure inference with deep neural networks on untrusted devices. CheckNet is like a checksum for neural network inference: it verifies the integrity of the inference computation performed by untrusted devices to 1) ensure the inference has actually been performed, and 2) ensure the inference has not been manipulated by an attacker. CheckNet is completely transparent to the third party running the computation, applicable to all types of neural networks, does not require specialized hardware, adds little overhead, and has negligible impact on model performance. CheckNet can be configured to provide different levels of security depending on application needs and compute/communication budgets. We present both empirical and theoretical validation of CheckNet on multiple popular deep neural network models, showing excellent attack detection (0.88-0.99 AUC) and attack success bounds.

1 Introduction

Leveraging untrusted third party devices and resources to execute deep learning inference serves an important role, but poses serious security risks. Untrusted third parties can purposely manipulate inference outputs in order to attack the machine learning application, for example, by changing the result of a classification problem. Alternatively, third parties can be “lazy” and return random or previously computed outputs in order to avoid expending computational resources performing the actual inference calculation.

In order to allow for trusted inference computation on untrusted third party devices, there is a need to verify the integrity of the inference computation. Efforts to date have been limited, either restricting applicability or requiring specialized hardware. New methods are needed that are generally applicable to commonly used and deployed deep learning models.

To this end we introduce CheckNet, a general method for verifying the integrity of deep learning inference computations performed by an untrusted third party. Just as a checksum verifies the integrity of data, CheckNet verifies the integrity of a machine learning inference computation. Specifically, it verifies that a given inference computation has been correctly executed and that the results have not been manipulated. CheckNet allows for configurable security levels based on application needs while being applicable to a wide range of deep neural network models.

In order to be widely applicable and easily used, CheckNet is designed to be compatible with all common deep learning methods and not require any specialized hardware. Specifically, CheckNet has the following design goals: 1) fast to verify the integrity of the computation and output; 2) requires only minimal change and retraining (last layer only) of the original network; and 3) adds only minimal communication and computation overhead.

CheckNet verifies inference integrity with two techniques. The first, HashCheck, verifies that the inference computation is consistent with respect to the input. The second, CrossCheck, verifies that the inference output has not been manipulated. Together, these techniques protect against attacks such as replay attacks (the adversary returns an old but valid inference result), random attacks (the adversary returns random results in order to avoid actually performing the inference computation), and targeted manipulation attacks (the adversary changes the values of particular output nodes in order to change the resulting classification). Further, CheckNet is completely transparent to the third party. While the classification task is used in this paper, CheckNet is applicable to other deep learning tasks, as it acts on node values of the penultimate layer (e.g., prior to the softmax layer) and is therefore not tied to a specific task.

We present empirical and theoretical validation of CheckNet on multiple popular deep neural network models under a wide set of attacks, showing excellent attack detection capabilities, and prove bounds on the probability of inference attacks evading CheckNet detection.

2 CheckNet

CheckNet is a general method to verify the integrity of an inference computation. Given an original inference computation $y = f(x)$ for a network $f$, CheckNet verifies for a given input $x$ that the inference computation has actually been executed on $x$, and that the output $y$ has not been manipulated. To do this, CheckNet uses two techniques: HashCheck and CrossCheck. HashCheck ensures the inference computation has actually been executed on $x$. It does this by constructing a pair of hash functions $(g, h)$ such that the bithash $g(x)$ of the inference input is approximately equal to the bithash $h(y)$ of the inference output, or $g(x) \approx h(y)$. CrossCheck ensures the output has not been manipulated (e.g., by swapping output node values to change the classification result). It does this by obscuring the result of the model: the model outputs multiple intertwined sets of results, building in redundancy, and the sets are cross checked among themselves. Figure 1 shows an overview of CheckNet and its two techniques.

2.1 HashCheck

HashCheck ensures that the output returned from an inference calculation is consistent with respect to the input. This helps protect against attacks in which, for either adversarial or lazy reasons, third parties purposely return an output that is unrelated to the input. Examples include returning either an old but valid inference result, or a randomly generated inference result, instead of the true inference result. An overview of HashCheck is shown at the top of Figure 1.

HashCheck provides a fast integrity check between the input and output by constructing a pair of hash functions relating the input and the inference output to a common bithash value. Specifically, hash functions $g$ and $h$ are built such that the bithash $g(x)$ of inference input $x$ is approximately equal (within a threshold) to the bithash $h(y)$ of inference output $y$, or $d_H(g(x), h(y)) \le t_H$, where $d_H$ is Hamming distance and $t_H$ is the threshold. Further, the hash functions are built to have negligible computation cost compared with the network inference itself. In this paper, we use an MLP with one hidden layer whose output is a binary code as the learned hash function. For the results in this paper with AlexNet krizhevsky2012imagenet on the 10-class classification problem, verification with HashCheck uses only approximately 1% of the MACs used for inference and adds only kilobytes of additional communication.

The HashCheck training process involves no modifications to the underlying model and uses a dataset $\{(x_i, y_i)\}_{i=1}^{D}$ created by pairing the inputs $x_i$ with the outputs $y_i = f(x_i)$ of the model, where $D$ is the size of the dataset. The two hash networks are derived in the following process. First, the $h$ network is initialized as a random matrix followed by a binary quantization, therefore performing a random linear projection from the output space to $\{0,1\}^{l}$. This network remains fixed throughout the entire training process and serves as a source of randomness. Second, the $g$ network is initialized as an MLP with one hidden layer followed by a sigmoid function, and is trained to map the input $x_i$ to the bithash $h(y_i)$ of the corresponding inference output $y_i$. $g$ is trained via backpropagation with loss term $\mathcal{L} = \mathrm{BCE}(g(x_i), h(y_i))$, where $\mathrm{BCE}$ is binary cross entropy loss, and $h(y_i)$ is the output of the random projection with binary quantization applied. Both hashes are of length $l$. Through this process, $g$ learns to map the input $x_i$ to the bithash code derived from the random projection of the output $y_i$, as determined by $h$. Since random projections preserve distances charikar2002similarity, HashCheck learns a hash function mapping into a bithash space where the distances between the outputs of the network are preserved. Any deviation from the true $y$ will result in a mismatch between the bithash codes. In order to increase robustness, multiple pairs of hash functions $(g_k, h_k)$, $k = 1, \dots, K$, are learned and used together. Section 2.3 describes how HashCheck detects attacks.
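The following is a minimal PyTorch sketch of this construction under our notation: $h$ is a fixed random projection of the model output followed by binary quantization, and $g$ is a one-hidden-layer MLP trained with binary cross entropy to predict $h(f(x))$ from $x$. The flattened input dimension, hidden width (512), and learning rate are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

l, M = 64, 100          # bithash length, output nodes of the expanded model
D_in = 3 * 32 * 32      # flattened CIFAR-10 input (assumption)

# h: a fixed random linear projection of the model output, binarized.
# It is never trained and serves as the source of randomness.
H = torch.randn(M, l)
def h_bithash(y):               # y: (batch, M) inference outputs
    return (y @ H > 0).float()  # binary quantization of the projection

# g: an MLP with one hidden layer (width 512 is an assumption) mapping
# the *input* to the same bithash, trained with binary cross entropy.
g = nn.Sequential(nn.Linear(D_in, 512), nn.ReLU(),
                  nn.Linear(512, l), nn.Sigmoid())
opt = torch.optim.Adam(g.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(x, y):           # x: (batch, 3, 32, 32), y = f(x): (batch, M)
    target = h_bithash(y)       # fixed random-projection bithash of the output
    loss = bce(g(x.flatten(1)), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def g_bithash(x):               # verification-time bithash of the input
    return (g(x.flatten(1)) > 0.5).float()
```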

2.2 CrossCheck

CrossCheck ensures the true output of an inference calculation has not been manipulated or fabricated in any way that changes the prediction outcome. It does this by obscuring the original output through returning multiple redundant, shuffled sets of results, and then cross checking the results among themselves, which delivers additional robustness as well. An overview of CrossCheck is shown at the bottom of Figure 1.

To use CrossCheck, only the final layer of a model is modified and retrained. The process has three steps. First, the output layer is expanded with additional output nodes. For a model performing $N$-class classification, the output layer is expanded to have $M$ output nodes, where $M > N$ (shown as the “Expanded Output Layer” in Figure 1, where the output layer of a model for a 3-class classification problem is expanded). Second, $S$ CrossCheck sets are formed, where each CrossCheck set (shown as the green, blue, orange and red sets of nodes in Figure 1) consists of $N$ nodes chosen at random from the $M$ output nodes in the augmented model. CrossCheck sets may overlap (i.e., the same output node may be used in different CrossCheck sets, and may correspond to different labels in the different CrossCheck sets), and some nodes need not be in any CrossCheck set at all, serving as decoys. Only the model owner knows the CrossCheck sets; the third party has no knowledge of how many sets are used or of their membership. Third, only the last layer of the model is retrained on the training set such that each CrossCheck set correctly performs the $N$-class classification task using its own nodes. As a result of training the modified model in this way, each CrossCheck set should independently output the correct classification, and the sets can therefore be used to “cross check” each other, as discussed in Section 2.3. Changing the model in this way has negligible impact on accuracy.
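A minimal sketch of CrossCheck set formation and voting, assuming the $N = 10$, $M = 100$, $S = 30$ setting used in Section 3; the convention that position $i$ within each set corresponds to class $i$ is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)   # seed known only to the model owner
N, M, S = 10, 100, 30            # classes, expanded output nodes, CrossCheck sets

# Each CrossCheck set is N nodes drawn at random from the M output nodes.
# Sets may overlap, unused nodes act as decoys, and only the owner knows them.
sets = [rng.choice(M, size=N, replace=False) for _ in range(S)]

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def crosscheck_vote(y_tilde):    # y_tilde: length-M output from the third party
    # Each set votes for a class; by convention, position i in a set is class i.
    votes = [int(np.argmax(softmax(y_tilde[s]))) for s in sets]
    winner = max(set(votes), key=votes.count)       # majority class
    return winner, votes.count(winner)              # (class, vote cardinality)
```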

This CrossCheck mechanism provides security by making it difficult for an adversary to manipulate the output of the model. For example, in a model without CrossCheck, if an adversary wished to change the model output, it could swap the maximum value in the output with another node, resulting in a different classification result. Under CrossCheck, however, this attack would be difficult to execute because the adversary does not know which nodes to attack (because of obfuscation) or how many nodes need to be attacked (because the level of redundancy is unknown), and because mistakes are likely to be detected. Even in cases where the adversary learned information about the sets, efforts to exploit this information will still be detected by CheckNet, as shown in Section 3.

Figure 1: CheckNet verifies the integrity of inference computations using two methods: HashCheck (ensures result $y$ is consistent with input $x$) and CrossCheck (ensures result $y$ has not been manipulated).

2.3 CheckNet Attack Detection Mechanism

Figure 1 shows how CheckNet uses the HashCheck and CrossCheck techniques. Specifically, to protect a model against attacks, the original model is modified to include the CrossCheck output layer, and a set of HashCheck hash function pairs $(g_k, h_k)$ is learned. The CheckNet-protected model is then given to the untrusted third party, which uses the model in normal fashion. The use of CheckNet is completely transparent to the third party: as far as the third party is concerned, the model is a normal model with an $M$-dimensional output. Specifically, the third party does not know the hash functions, the true underlying number of classes $N$ in the original model, the number of CrossCheck sets $S$, or the membership of each of the CrossCheck sets. The third party runs the inference normally, resulting in an $M$-dimensional output $\tilde{y}$.

This $M$-dimensional output $\tilde{y}$ is then returned. First, an “unverified result” is obtained by collecting the classification vote of each CrossCheck set (i.e., applying a softmax to each CrossCheck set and selecting the class corresponding to the maximum) and taking a majority vote. Second, the integrity of the result is verified using both HashCheck and CrossCheck. For the HashCheck verification, for each pair of hash functions $(g_k, h_k)$, the bithash $g_k(x)$ of the original input and the bithash $h_k(\tilde{y})$ of the returned output are obtained, and the Hamming distance between the two bithash values is compared to a threshold $t_H$. If $d_H(g_k(x), h_k(\tilde{y})) \le t_H$, the result passes that particular HashCheck integrity check. For the CrossCheck verification, the cardinality $v$ of the majority vote is compared with a threshold $t_C$. If $v \ge t_C$, the result passes the CrossCheck integrity check. If the result passes all of the HashCheck and CrossCheck integrity checks, it is accepted; otherwise, it is rejected.
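Putting the two checks together, a sketch of the verifier's accept/reject decision; it reuses crosscheck_vote from the CrossCheck sketch above, and the function names are ours:

```python
import numpy as np

def hamming(a, b):
    return int(np.sum(a != b))

def checknet_accept(bithash_pairs, y_tilde, t_H, t_C):
    """bithash_pairs: list of (g_k(x), h_k(y_tilde)) binary arrays computed
    by the verifier; y_tilde: the M-dimensional returned output."""
    # HashCheck: every hash pair must agree to within Hamming distance t_H.
    for g_code, h_code in bithash_pairs:
        if hamming(g_code, h_code) > t_H:
            return None              # reject: output inconsistent with input
    # CrossCheck: the majority vote must have cardinality at least t_C.
    label, cardinality = crosscheck_vote(y_tilde)   # from the sketch above
    if cardinality < t_C:
        return None                  # reject: output likely manipulated
    return label                     # accept the unverified result
```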

Both techniques are needed to detect the wide range of possible attacks, which include “hard working/adversarial” attacks, which seek to subvert the system by changing the prediction outcome, and “lazy” attacks, which seek to avoid actually executing the inference computation (see Section 3.1 for examples of these attacks). HashCheck alone cannot detect small changes in the output nodes, as certain margins of changes to the output are needed to change the resulting bithash (as neural networks are robust to these small changes). CrossCheck provides the mechanism to thwart a “hard-working” adversary who may otherwise try to find tiny surfaces on the manifold that they can alter in order to subvert the system. Further, CrossCheck alone does not protect against attacks that return valid outputs that do not match the input, as its job is to detect if an output is valid or not, and so relies on HashCheck to detect the discrepancy with the input.

The computational and communication complexity of CheckNet are both small compared to those of the original model. The computational complexity of HashCheck verification is $O(K \cdot l \cdot M)$, where $K$ is the number of HashCheck pairs, $l$ is the bithash length, and $M$ is the number of output nodes. The computational complexity of the protected model’s new last layer after being expanded by CrossCheck is $O(M \cdot P)$, where $P$ is the number of nodes in the model’s penultimate layer, as compared to the $O(N \cdot P)$ complexity of the unprotected model’s last layer. Further, the communication complexity of CheckNet is $O(M)$, where $M$ is the number of output nodes, while that of the unprotected model is $O(N)$. These hyperparameters can be adjusted to meet compute and communication budgets, as is further discussed in Section 3.
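As a worked example under our reconstructed complexity expressions; the penultimate-layer width $P = 4096$ and $K = 4$ HashCheck pairs are illustrative assumptions:

```python
# Illustrative sizes; P (penultimate width) and K (hash pairs) are assumptions.
P, N, M, K, l = 4096, 10, 100, 4, 64

last_layer_unprotected = N * P       # O(N*P) MACs in the original last layer
last_layer_protected   = M * P       # O(M*P) MACs after CrossCheck expansion
hashcheck_verification = K * l * M   # O(K*l*M) MACs to bithash the output

print(last_layer_protected / last_layer_unprotected)  # 10.0: only the last layer grows
print(hashcheck_verification)                         # 25600 MACs, tiny vs. full inference
print(M / N)                                          # 10.0: N floats -> M floats communicated
```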

3 Evaluations

We present empirical results for CheckNet under different attack models. We evaluate on two popular models, MobileNet howard2017mobilenets and AlexNet krizhevsky2012imagenet, and evaluate performance using the CIFAR-10 dataset krizhevsky2009learning. As MobileNet is a network specifically designed for mobile devices likely to run inference calculations for peers, these results are particularly salient. For all results, we use the standard CIFAR-10 train/test splits (50k/10k samples, respectively). We also obtained results on the FashionMNIST xiao2017fashion and CIFAR-100 krizhevsky2009learning datasets, but observed similar trends and therefore omit them for space. We train the models and generate results using a GeForce RTX 2080 Ti GPU.

3.1 Attack Models

We evaluate the effectiveness of CheckNet under three attack models on an inference computation executed on input $x$ that should return $y = f(x)$:

  1. Random: a random inference result is returned, where the value of each output node is sampled from a distribution characterizing valid output node values. This attack would be used by a “lazy” third party not wanting to actually run the inference computation.

  2. Targeted Classification Change: the values of one or more output nodes within output vector $\tilde{y}$ are changed to above the maximum value in an attempt to change the classification result. This attack would be used by an “adversarial” third party to change the result. Since this requires the third party to first run the original inference in order to obtain the result and then alter it, this attack requires effort, making it a “hard-working” third party attack.

  3. Replay: a different but otherwise valid inference result, computed on a different input $x'$, is returned. This attack would also be used by a “lazy” third party. Additionally, if an “adversarial” third party were somehow able to learn the number or membership of the CrossCheck sets, its targeted classification attack would resemble a replay attack, as it would be able to closely approximate the output distribution of another class.
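A minimal sketch of how these three attacks could be simulated for evaluation; the distribution parameters, number of attacked nodes, and margin are our assumptions:

```python
import numpy as np
rng = np.random.default_rng(1)

def random_attack(M, mu=0.0, sigma=1.0):
    # "Lazy": sample every output node from a distribution characterizing
    # valid output values (mu, sigma are illustrative).
    return rng.normal(mu, sigma, size=M)

def targeted_attack(y_true, n_nodes=3, margin=1.0):
    # "Hard-working": run the real inference, then push n_nodes output
    # values above the current maximum to try to flip the classification.
    y = y_true.copy()
    idx = rng.choice(len(y), size=n_nodes, replace=False)
    y[idx] = y.max() + margin
    return y

def replay_attack(cached_outputs):
    # "Lazy": return a valid output previously computed for a different input.
    return cached_outputs[rng.integers(len(cached_outputs))]
```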

3.2 Attack Detection and Receiver Operating Characteristic (ROC) Curves

The receiver operating characteristic (ROC) curves for CheckNet’s detection ability for the three attack methods are shown in Figure 2 for MobileNet and Figure 3 for AlexNet. The curves are plotted in terms of true positive rate (TPR: the rate at which attacks are successfully detected) and false positive rate (FPR: the rate at which legitimate inputs are incorrectly flagged as attacks). The results are shown for the following hyperparameter settings: 100 output nodes ($M = 100$), 30 CrossCheck sets ($S = 30$), and a 64-bit bithash length ($l = 64$). The ROC curves are generated by varying the CrossCheck and HashCheck thresholds, resulting in the different TPR/FPR tradeoffs that make up each ROC curve. Both thresholds are set by selecting seven threshold values evenly spaced between the minimum and maximum threshold values (the CrossCheck threshold $t_C$ ranges from 0 to the number of CrossCheck sets $S$, and the HashCheck threshold $t_H$ ranges from 0 to the bithash length $l$), resulting in 49 pairs of thresholds evaluated. A sample is accepted only if all threshold conditions are met, and is rejected otherwise. Threshold settings that result in a strictly worse TPR/FPR tradeoff are discarded.
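A sketch of this threshold sweep, assuming for simplicity a single HashCheck pair per sample; the scoring interface is our own:

```python
import numpy as np

def roc_points(attack_scores, clean_scores, S=30, l=64, n_thresh=7):
    """attack_scores / clean_scores: lists of (hash_distance, vote_cardinality)
    tuples for attacked and legitimate samples. Returns non-dominated (FPR, TPR)."""
    pts = []
    for t_H in np.linspace(0, l, n_thresh):        # 7 HashCheck thresholds
        for t_C in np.linspace(0, S, n_thresh):    # 7 CrossCheck thresholds -> 49 pairs
            reject = lambda d, v: d > t_H or v < t_C
            tpr = np.mean([reject(d, v) for d, v in attack_scores])
            fpr = np.mean([reject(d, v) for d, v in clean_scores])
            pts.append((fpr, tpr))
    uniq = set(pts)
    # discard settings with a strictly worse TPR/FPR tradeoff
    return sorted(p for p in uniq
                  if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in uniq))
```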

Figure 2: ROC curves for attacks on MobileNet.
Figure 3: ROC curves for attacks on AlexNet.

CheckNet provides accurate detection of all three attack methods, achieving AUC scores on the random, targeted, and replay attacks of 0.98, 0.95, and 0.88 with the MobileNet model, and 0.99, 0.97, and 0.87 with the AlexNet model. The replay attack is the most difficult of the three to detect because the output is an otherwise valid inference output, just not for the given input sample. As a result, the CrossCheck technique is not applicable in this case (as its job is only to verify that the output is valid, which by definition a replayed output is), so only the HashCheck technique can detect the replay attack.

3.3 Effects of Hyperparameters on Attack Detection

Figure 4 shows the effects of different hyperparameter settings for CheckNet under a targeted attack: the number of output nodes ($M$), the number of CrossCheck sets ($S$), and the bithash length ($l$). Generally, the robustness of CheckNet can be controlled by increasing 1) the number of output nodes ($M$) and 2) the bithash length ($l$). Increasing the bithash length ($l$) improves attack detection (red vs. blue curve), aligning with the theoretical findings in Section 4. Using a smaller number of CrossCheck sets improves detection performance (black vs. blue curve), as a larger number of CrossCheck sets increases the number of nodes that are used in potentially conflicting ways by different CrossCheck sets (i.e., it increases the overlap between different CrossCheck sets). This loss in detection accuracy may be acceptable in order to gain obfuscation benefits, as more overlap between CrossCheck sets makes it more difficult for an adversary to glean information regarding CrossCheck set membership. Increasing the number of output nodes improves attack detection (black vs. yellow curve), which is also explained by the reduced overlap when more output nodes are used for the same number of CrossCheck sets.

3.4 Robustness Against Undetected Attacks

CheckNet users can choose a TPR/FPR tradeoff along the ROC curve that suits the application via the $t_H$ and $t_C$ hyperparameters. Choosing a setting with a true positive rate below 1 will by definition result in some attacks not being detected. However, even in this case, CheckNet can still classify these “undetected attack samples” correctly in some settings due to the redundancy in the CrossCheck mechanism. Figure 5 shows this result for different settings of $t_C$ and $t_H$, using “effective accuracy” (defined as the accuracy of the CheckNet-protected network on the attack samples that are not rejected) as a metric under a targeted attack, compared with the accuracy of the original model with no attack (grey line). For the different threshold settings, even under the targeted attack, CheckNet-protected networks reject most of the attacks (dotted green line and right y-axis), and achieve similar accuracy on the attack samples they fail to reject. Lower values of $t_C$ cause the CheckNet models to miss detecting more attacks, as shown by the green line denoting the percent of attacks detected, thereby lowering the effective accuracy: relatively more egregious attacks that cannot be rectified via the CrossCheck majority voting are not filtered out, while higher values of $t_C$ filter out these harder attacks, leaving those that CrossCheck can still correctly classify even under attack. A similar trend is seen with regard to $t_H$, where lower, more stringent thresholds lead to higher effective accuracy than higher, more lenient thresholds. These results align with our theoretical analysis in Section 4, where we show it is harder for an attacker to launch a successful attack as $t_C$ increases or as $t_H$ decreases, resulting in better effective accuracy overall.

Figure 4: ROC curves under targeted attack on AlexNet for different hyperparameters: output nodes ($M$), CrossCheck sets ($S$), and bithash length ($l$).
Figure 5: Effective accuracy (left y-axis) and percent of attacks detected (right y-axis) for AlexNet protected by CheckNet. Other hyperparameters are set as in Section 3.2: $M = 100$, $S = 30$, and $l = 64$.

4 Analysis

We derive theoretical models for robustness against attacks on the two CheckNet integrity verification mechanisms (HashCheck and CrossCheck) in order to show that it is hard to attack either of these components. This supports the empirical results in Section 3.4 showing it is hard for an attacker to avoid CheckNet’s detection mechanisms and successfully alter the classification results.

4.1 HashCheck Analysis

We consider the attack in which the third party tries to pass the integrity check without performing the inference computation (such as the “random” attack described in Section 3).

Definition 1.

(Successful Attack on HashCheck) A random attack is successful if an attacker can guess a $\tilde{y}$ that results in $d_H(g(x), h(\tilde{y})) \le t_H$.

Theorem 1.

The probability that the adversary can subvert the bithash integrity check (successful attack on HashCheck) is $P(X \ge l - t_H)$ for $X \sim \mathrm{Binomial}(l, 1/2)$ (where $F$ is the binomial CDF), or $1 - F(l - t_H - 1;\ l,\ 1/2)$.

Proof.

Given a mapping $h$ from the output space to $\{0,1\}^{l}$, for every bithash $b$ there exists a corresponding $\tilde{y}$ that maps to $b$. To guess a bithash such that $d_H(g(x), h(\tilde{y})) \le t_H$, an attacker needs to randomly draw an $l$-bit bithash, with its corresponding $\tilde{y}$, such that at least $l - t_H$ bits match $g(x)$. This is characterized by a binomial distribution with $l$ trials, each of success probability $1/2$; i.e., the probability of a successful attack on HashCheck is equal to $P(X \ge l - t_H)$, or $1 - F(l - t_H - 1;\ l,\ 1/2)$. ∎
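A quick numeric check of this bound; the threshold choice $t_H = 8$ is ours, for illustration:

```python
from scipy.stats import binom

def hashcheck_attack_prob(l=64, t_H=8):
    # P[random bithash matches g(x) in at least l - t_H positions]
    # = 1 - F(l - t_H - 1; l, 1/2), with F the binomial CDF.
    return binom.sf(l - t_H - 1, l, 0.5)

print(hashcheck_attack_prob())   # ~2.8e-10 for l = 64, t_H = 8
```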

4.2 CrossCheck

We consider an attack to change the classification result from class $c$ to class $c'$ for some $c' \ne c$. To do this, the adversary must maximize the intra-set value of the node corresponding to class $c'$ in at least $t_C$ of the CrossCheck sets.

Definition 2.

(Successful Attack on CrossCheck) An attack is successful if an attacker can find a $\tilde{y}$ that results in a majority vote for the target class with cardinality greater than $t_C$.

We define the following terminology. “Overlap” refers to situations where nodes can be in multiple CrossCheck sets, and “No Overlap” refers to situations where nodes are exclusively in one CrossCheck set. “Knowledge” refers to when the adversary knows which nodes are used in at least one CrossCheck set, and “No Knowledge” refers to when the adversary does not know whether a node is used in any CrossCheck set. In actual use, the adversary has “No Knowledge”; we consider “Knowledge” only for the purpose of the proof. Let $E$ be the event that an attack succeeds.

Lemma 1.

$P(E \mid \text{Knowledge}) \ge P(E \mid \text{No Knowledge})$.

Proof.

An adversary selects $S$ nodes to maximize, and there is a fixed number of ways to successfully choose the target nodes. Without knowledge, there are $\binom{M}{S}$ possible sets of guesses to make. With knowledge, there are $\binom{M'}{S}$ possible sets of guesses to make, where $M' \le M$ is the number of nodes used in at least one CrossCheck set. Because $\binom{M'}{S} \le \binom{M}{S}$, knowledge restricts the selection space, making an attack more likely to succeed with knowledge than without. ∎

Lemma 2.

$P(E \mid \text{Overlap}) \le P(E \mid \text{No Overlap})$.

Proof.

Without overlap, selecting one of the target nodes to maximize (i.e., set above the maximum) has no impact on the efficacy of a second successful selection within a different CrossCheck set, as the node by definition cannot be in the second set. With overlap, in the non-zero number of cases where the first successfully selected node is also a non-target node in a second CrossCheck set, subsequently selecting the target node in this other CrossCheck set to similarly maximize will result in a tie between the two selected nodes. This tie will have to be broken between the two nodes, which in some cases will result in the wrong node being maximized and the attack therefore being unsuccessful. Because there are strictly more scenarios in which node selection can cause an attack to be unsuccessful in the overlap case, the overlap case has a lower probability of success. ∎

Lemma 3.

$P(E \mid \text{Knowledge, No Overlap}) = 1 - F(t_C - 1;\ S,\ 1/N)$.

Proof.

The probability of a successful attack with this knowledge is characterized by a binomial distribution with $S$ (the number of CrossCheck sets) trials, where each trial has success probability $1/N$, with $N$ the number of nodes in each CrossCheck set. The probability that the adversary can subvert the CrossCheck integrity check with no overlap and full knowledge of the CrossCheck sets is therefore $P(X \ge t_C)$ (where $F$ is the binomial CDF), or $1 - F(t_C - 1;\ S,\ 1/N)$. ∎

Theorem 2.

The upper bound on the probability that the adversary can subvert the CrossCheck integrity check (successful attack on CrossCheck) is $P(X \ge t_C)$ for $X \sim \mathrm{Binomial}(S, 1/N)$ (where $F$ is the binomial CDF), or $1 - F(t_C - 1;\ S,\ 1/N)$.

Proof.

Following Lemma 1, Lemma 2, and Lemma 3 above, the probability of a successful attack is maximized under “Knowledge” and “No Overlap,” so the upper bound on the probability that the adversary can successfully subvert the CrossCheck integrity check is $1 - F(t_C - 1;\ S,\ 1/N)$. ∎
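A quick numeric check of this bound, again with illustrative threshold values of our choosing:

```python
from scipy.stats import binom

def crosscheck_attack_bound(S=30, N=10, t_C=16):
    # Upper bound (Knowledge + No Overlap): each of the S sets is an
    # independent 1/N guess, and the attack needs at least t_C successes.
    return binom.sf(t_C - 1, S, 1.0 / N)

print(crosscheck_attack_bound())   # ~3.7e-9 for S = 30, N = 10, t_C = 16
```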

5 Related Work

CrossCheck training can be viewed as an application of dropout srivastava2014dropout . Each CrossCheck set is trained by “dropping out” output nodes not in its set, and the CrossCheck output can be viewed as averaging a set of sub-models. The voting method to obtain a classification from CrossCheck sets can be seen as an ensemble method, which is known to provide additional robustness and accuracy krogh1995neural .

Verifying inference integrity has recently been studied in limited contexts. Recently introduced methods either frame the problem in a constrained framework (e.g., modeling the network as an arithmetic circuit) or use specialized hardware. ghodsi2017safetynets utilizes an interactive proof to provide proof of correctness, but is only applicable to neural networks that can be expressed as an arithmetic circuit. This limits applicability, as it cannot use common activation functions (e.g., ReLU) except in the last layer, or common pooling layers (e.g., max pooling). In contrast, our method does not place any restrictions on the underlying model.

chen2018securenets uses a layer-at-a-time execution model that modifies the input and each layer’s weight matrix in order to provide privacy and verify correctness. It requires computing secure matrix transformations for each layer, sending the secure input and weight matrix for each layer to the third party, and returning the output of each layer to the verifier, which adds computational and communication overhead. In contrast, CheckNet outsources the entire inference computation to the third party, adding little computational and communication overhead. tramer2018slalom introduces a framework for partitioning inference calculations between a trusted execution environment (such as Intel™ SGX costan2016intel) and an untrusted GPU, but requires a trusted execution environment. In contrast, our method has no hardware requirements, and is therefore applicable where specialized hardware is not available (e.g., IoT devices).

In contrast to all these methods, to our knowledge we are the first to propose a method for secure inference that is applicable to deep learning models in general without the need for specialized hardware. As such, our proposed CheckNet method will be applicable to all devices capable of running inference computations, without adding model architecture or hardware constraints.

6 Conclusion

We have introduced CheckNet, a general method for verifying the integrity of inference computations performed by an untrusted third party. CheckNet verifies that the inference computation has actually been performed and has not been manipulated by an adversary. CheckNet will enable the expansion and scaling of inference computations to untrusted third party devices and cloud providers in a secure manner. As machine learning is applied to increasingly critical industries such as healthcare and national security, securing inference with methods such as CheckNet will become a critical component of machine learning applications that rely on third parties.

The main advantages of using CheckNet for secure inference are its wide applicability, small overhead, and complete lack of prerequisites such as specialized hardware. CheckNet is generally applicable to deep learning models (e.g., there are no theoretical limitations on model size or architecture), and requires negligible modification to the model being protected. CheckNet adds negligible computational overhead to the inference computation, and the verification computational overhead is very small compared to the original network size. CheckNet also adds only negligible communication overhead, making it well suited to network-limited applications. These overheads can be changed via easily selected hyperparameters, which also allow users flexibility in choosing an appropriate level of security (via a TPR/FPR tradeoff) based on application needs.

CheckNet provides these security benefits through two mechanisms. HashCheck ensures that the inference output is consistent with the input. CrossCheck ensures that the inference output has not been manipulated, doing so by obscuring the output and building in redundancy that allows the integrity of the result to be confirmed. These mechanisms work together to thwart both “lazy” attacks that seek to avoid doing the computation and “hard-working” adversarial attacks that seek to change the outcome, both of which we have shown CheckNet can detect with high accuracy.

References

M. Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the ACM Symposium on Theory of Computing (STOC), 2002.

X. Chen, J. Ji, L. Yu, C. Luo, and P. Li. SecureNets: Secure inference of deep neural networks on an untrusted cloud. In Proceedings of the Asian Conference on Machine Learning (ACML), 2018.

V. Costan and S. Devadas. Intel SGX explained. IACR Cryptology ePrint Archive, 2016.

Z. Ghodsi, T. Gu, and S. Garg. SafetyNets: Verifiable execution of deep neural networks on an untrusted cloud. In Advances in Neural Information Processing Systems (NeurIPS), 2017.

A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861, 2017.

A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.

A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.

A. Krogh and J. Vedelsby. Neural network ensembles, cross validation, and active learning. In Advances in Neural Information Processing Systems (NIPS), 1995.

N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

F. Tramèr and D. Boneh. Slalom: Fast, verifiable and private execution of neural networks in trusted hardware. In International Conference on Learning Representations (ICLR), 2019.

H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747, 2017.